00:00:00.001 Started by upstream project "autotest-per-patch" build number 126208 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.090 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.092 The recommended git tool is: git 00:00:00.092 using credential 00000000-0000-0000-0000-000000000002 00:00:00.094 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.133 Fetching changes from the remote Git repository 00:00:00.136 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.168 Using shallow fetch with depth 1 00:00:00.168 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.168 > git --version # timeout=10 00:00:00.192 > git --version # 'git version 2.39.2' 00:00:00.192 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.210 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.210 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.216 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.227 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.237 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:04.237 > git config core.sparsecheckout # timeout=10 00:00:04.247 > git read-tree -mu HEAD # timeout=10 00:00:04.263 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:04.280 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:04.280 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:04.385 [Pipeline] Start of Pipeline 00:00:04.400 [Pipeline] library 00:00:04.402 Loading library shm_lib@master 00:00:04.402 Library shm_lib@master is cached. Copying from home. 00:00:04.419 [Pipeline] node 00:00:04.427 Running on VM-host-WFP7 in /var/jenkins/workspace/nvme-vg-autotest 00:00:04.432 [Pipeline] { 00:00:04.443 [Pipeline] catchError 00:00:04.445 [Pipeline] { 00:00:04.460 [Pipeline] wrap 00:00:04.470 [Pipeline] { 00:00:04.478 [Pipeline] stage 00:00:04.480 [Pipeline] { (Prologue) 00:00:04.501 [Pipeline] echo 00:00:04.502 Node: VM-host-WFP7 00:00:04.508 [Pipeline] cleanWs 00:00:04.516 [WS-CLEANUP] Deleting project workspace... 00:00:04.516 [WS-CLEANUP] Deferred wipeout is used... 
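The prologue above amounts to a shallow, pinned checkout of the jbp job-config repository through the site proxy. A minimal sketch of the equivalent manual steps, using only the URL, proxy, and revision reported in the log (the GIT_ASKPASS credential setup is omitted here and would still be required against the real server):

    # Shallow-fetch the tip of master from the build-pool mirror and pin the
    # working tree to the exact revision the job resolved (7caca6989...).
    git init jbp && cd jbp
    git remote add origin https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    git -c http.proxy=proxy-dmz.intel.com:911 fetch --tags --force --depth=1 origin refs/heads/master
    git checkout -f 7caca6989ac753a10259529aadac5754060382af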
00:00:04.522 [WS-CLEANUP] done 00:00:04.673 [Pipeline] setCustomBuildProperty 00:00:04.756 [Pipeline] httpRequest 00:00:04.771 [Pipeline] echo 00:00:04.773 Sorcerer 10.211.164.101 is alive 00:00:04.779 [Pipeline] httpRequest 00:00:04.783 HttpMethod: GET 00:00:04.783 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:04.784 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:04.790 Response Code: HTTP/1.1 200 OK 00:00:04.790 Success: Status code 200 is in the accepted range: 200,404 00:00:04.791 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:06.066 [Pipeline] sh 00:00:06.350 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:06.366 [Pipeline] httpRequest 00:00:06.384 [Pipeline] echo 00:00:06.385 Sorcerer 10.211.164.101 is alive 00:00:06.392 [Pipeline] httpRequest 00:00:06.395 HttpMethod: GET 00:00:06.396 URL: http://10.211.164.101/packages/spdk_33d82c0da54dac644ff15b9023e50a005979ecfb.tar.gz 00:00:06.397 Sending request to url: http://10.211.164.101/packages/spdk_33d82c0da54dac644ff15b9023e50a005979ecfb.tar.gz 00:00:06.400 Response Code: HTTP/1.1 200 OK 00:00:06.400 Success: Status code 200 is in the accepted range: 200,404 00:00:06.400 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_33d82c0da54dac644ff15b9023e50a005979ecfb.tar.gz 00:00:24.784 [Pipeline] sh 00:00:25.067 + tar --no-same-owner -xf spdk_33d82c0da54dac644ff15b9023e50a005979ecfb.tar.gz 00:00:27.612 [Pipeline] sh 00:00:27.894 + git -C spdk log --oneline -n5 00:00:27.894 33d82c0da test/bdev: Skip "hidden" nvme devices from the sysfs 00:00:27.894 719d03c6a sock/uring: only register net impl if supported 00:00:27.894 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:00:27.894 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:00:27.894 6c7c1f57e accel: add sequence outstanding stat 00:00:27.913 [Pipeline] writeFile 00:00:27.928 [Pipeline] sh 00:00:28.209 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:28.221 [Pipeline] sh 00:00:28.501 + cat autorun-spdk.conf 00:00:28.501 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:28.501 SPDK_TEST_NVME=1 00:00:28.501 SPDK_TEST_FTL=1 00:00:28.501 SPDK_TEST_ISAL=1 00:00:28.501 SPDK_RUN_ASAN=1 00:00:28.501 SPDK_RUN_UBSAN=1 00:00:28.501 SPDK_TEST_XNVME=1 00:00:28.501 SPDK_TEST_NVME_FDP=1 00:00:28.501 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:28.507 RUN_NIGHTLY=0 00:00:28.510 [Pipeline] } 00:00:28.528 [Pipeline] // stage 00:00:28.548 [Pipeline] stage 00:00:28.551 [Pipeline] { (Run VM) 00:00:28.565 [Pipeline] sh 00:00:28.850 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:28.850 + echo 'Start stage prepare_nvme.sh' 00:00:28.850 Start stage prepare_nvme.sh 00:00:28.850 + [[ -n 7 ]] 00:00:28.850 + disk_prefix=ex7 00:00:28.850 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:00:28.850 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:00:28.850 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:00:28.850 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:28.850 ++ SPDK_TEST_NVME=1 00:00:28.850 ++ SPDK_TEST_FTL=1 00:00:28.850 ++ SPDK_TEST_ISAL=1 00:00:28.850 ++ SPDK_RUN_ASAN=1 00:00:28.850 ++ SPDK_RUN_UBSAN=1 00:00:28.850 ++ SPDK_TEST_XNVME=1 00:00:28.850 ++ SPDK_TEST_NVME_FDP=1 00:00:28.850 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:28.850 ++ RUN_NIGHTLY=0 00:00:28.850 + cd 
/var/jenkins/workspace/nvme-vg-autotest 00:00:28.850 + nvme_files=() 00:00:28.850 + declare -A nvme_files 00:00:28.850 + backend_dir=/var/lib/libvirt/images/backends 00:00:28.850 + nvme_files['nvme.img']=5G 00:00:28.850 + nvme_files['nvme-cmb.img']=5G 00:00:28.850 + nvme_files['nvme-multi0.img']=4G 00:00:28.850 + nvme_files['nvme-multi1.img']=4G 00:00:28.850 + nvme_files['nvme-multi2.img']=4G 00:00:28.850 + nvme_files['nvme-openstack.img']=8G 00:00:28.850 + nvme_files['nvme-zns.img']=5G 00:00:28.850 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:28.850 + (( SPDK_TEST_FTL == 1 )) 00:00:28.850 + nvme_files["nvme-ftl.img"]=6G 00:00:28.850 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:28.850 + nvme_files["nvme-fdp.img"]=1G 00:00:28.850 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:28.850 + for nvme in "${!nvme_files[@]}" 00:00:28.850 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:00:28.850 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:28.850 + for nvme in "${!nvme_files[@]}" 00:00:28.850 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-ftl.img -s 6G 00:00:28.850 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:00:28.850 + for nvme in "${!nvme_files[@]}" 00:00:28.850 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:00:28.850 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:28.850 + for nvme in "${!nvme_files[@]}" 00:00:28.850 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:00:28.850 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:28.850 + for nvme in "${!nvme_files[@]}" 00:00:28.850 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:00:28.850 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:28.850 + for nvme in "${!nvme_files[@]}" 00:00:28.850 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:00:28.850 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:28.850 + for nvme in "${!nvme_files[@]}" 00:00:28.850 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:00:29.112 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:29.112 + for nvme in "${!nvme_files[@]}" 00:00:29.112 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-fdp.img -s 1G 00:00:29.112 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:00:29.112 + for nvme in "${!nvme_files[@]}" 00:00:29.113 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:00:29.113 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:29.113 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:00:29.113 + echo 'End stage prepare_nvme.sh' 00:00:29.113 End stage 
prepare_nvme.sh 00:00:29.124 [Pipeline] sh 00:00:29.415 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:29.415 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex7-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex7-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38 00:00:29.415 00:00:29.415 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:00:29.415 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:00:29.415 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:00:29.415 HELP=0 00:00:29.415 DRY_RUN=0 00:00:29.415 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,/var/lib/libvirt/images/backends/ex7-nvme-fdp.img, 00:00:29.415 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:00:29.415 NVME_AUTO_CREATE=0 00:00:29.415 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,, 00:00:29.415 NVME_CMB=,,,, 00:00:29.415 NVME_PMR=,,,, 00:00:29.415 NVME_ZNS=,,,, 00:00:29.415 NVME_MS=true,,,, 00:00:29.415 NVME_FDP=,,,on, 00:00:29.415 SPDK_VAGRANT_DISTRO=fedora38 00:00:29.415 SPDK_VAGRANT_VMCPU=10 00:00:29.415 SPDK_VAGRANT_VMRAM=12288 00:00:29.415 SPDK_VAGRANT_PROVIDER=libvirt 00:00:29.415 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:29.415 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:29.415 SPDK_OPENSTACK_NETWORK=0 00:00:29.415 VAGRANT_PACKAGE_BOX=0 00:00:29.415 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:29.415 FORCE_DISTRO=true 00:00:29.415 VAGRANT_BOX_VERSION= 00:00:29.415 EXTRA_VAGRANTFILES= 00:00:29.415 NIC_MODEL=virtio 00:00:29.415 00:00:29.415 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt' 00:00:29.415 /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:00:31.956 Bringing machine 'default' up with 'libvirt' provider... 00:00:32.215 ==> default: Creating image (snapshot of base box volume). 00:00:32.475 ==> default: Creating domain with the following settings... 
00:00:32.475 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721055370_6831497e4a185630d457 00:00:32.475 ==> default: -- Domain type: kvm 00:00:32.475 ==> default: -- Cpus: 10 00:00:32.475 ==> default: -- Feature: acpi 00:00:32.475 ==> default: -- Feature: apic 00:00:32.475 ==> default: -- Feature: pae 00:00:32.475 ==> default: -- Memory: 12288M 00:00:32.475 ==> default: -- Memory Backing: hugepages: 00:00:32.475 ==> default: -- Management MAC: 00:00:32.475 ==> default: -- Loader: 00:00:32.475 ==> default: -- Nvram: 00:00:32.475 ==> default: -- Base box: spdk/fedora38 00:00:32.475 ==> default: -- Storage pool: default 00:00:32.475 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721055370_6831497e4a185630d457.img (20G) 00:00:32.475 ==> default: -- Volume Cache: default 00:00:32.475 ==> default: -- Kernel: 00:00:32.475 ==> default: -- Initrd: 00:00:32.475 ==> default: -- Graphics Type: vnc 00:00:32.475 ==> default: -- Graphics Port: -1 00:00:32.475 ==> default: -- Graphics IP: 127.0.0.1 00:00:32.475 ==> default: -- Graphics Password: Not defined 00:00:32.475 ==> default: -- Video Type: cirrus 00:00:32.475 ==> default: -- Video VRAM: 9216 00:00:32.475 ==> default: -- Sound Type: 00:00:32.475 ==> default: -- Keymap: en-us 00:00:32.475 ==> default: -- TPM Path: 00:00:32.475 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:32.475 ==> default: -- Command line args: 00:00:32.475 ==> default: -> value=-device, 00:00:32.475 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:32.475 ==> default: -> value=-drive, 00:00:32.475 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:00:32.475 ==> default: -> value=-device, 00:00:32.475 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:00:32.475 ==> default: -> value=-device, 00:00:32.475 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:32.475 ==> default: -> value=-drive, 00:00:32.475 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-1-drive0, 00:00:32.475 ==> default: -> value=-device, 00:00:32.475 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:32.475 ==> default: -> value=-device, 00:00:32.475 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:00:32.475 ==> default: -> value=-drive, 00:00:32.475 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:00:32.475 ==> default: -> value=-device, 00:00:32.475 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:32.475 ==> default: -> value=-drive, 00:00:32.475 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:00:32.475 ==> default: -> value=-device, 00:00:32.475 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:32.475 ==> default: -> value=-drive, 00:00:32.475 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:00:32.475 ==> default: -> value=-device, 00:00:32.475 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:32.475 ==> default: -> value=-device, 00:00:32.475 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:00:32.475 ==> default: -> value=-device, 00:00:32.476 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:00:32.476 ==> default: -> value=-drive, 00:00:32.476 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:00:32.476 ==> default: -> value=-device, 00:00:32.476 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:32.476 ==> default: Creating shared folders metadata... 00:00:32.476 ==> default: Starting domain. 00:00:33.851 ==> default: Waiting for domain to get an IP address... 00:00:51.935 ==> default: Waiting for SSH to become available... 00:00:51.935 ==> default: Configuring and enabling network interfaces... 00:00:56.130 default: SSH address: 192.168.121.147:22 00:00:56.130 default: SSH username: vagrant 00:00:56.130 default: SSH auth method: private key 00:00:59.430 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:07.543 ==> default: Mounting SSHFS shared folder... 00:01:08.919 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:08.919 ==> default: Checking Mount.. 00:01:10.293 ==> default: Folder Successfully Mounted! 00:01:10.293 ==> default: Running provisioner: file... 00:01:11.233 default: ~/.gitconfig => .gitconfig 00:01:11.798 00:01:11.798 SUCCESS! 00:01:11.798 00:01:11.798 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:01:11.798 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:11.798 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:01:11.798 00:01:11.806 [Pipeline] } 00:01:11.824 [Pipeline] // stage 00:01:11.834 [Pipeline] dir 00:01:11.834 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt 00:01:11.836 [Pipeline] { 00:01:11.849 [Pipeline] catchError 00:01:11.851 [Pipeline] { 00:01:11.863 [Pipeline] sh 00:01:12.144 + vagrant ssh-config --host vagrant 00:01:12.144 + sed -ne /^Host/,$p 00:01:12.144 + tee ssh_conf 00:01:14.671 Host vagrant 00:01:14.671 HostName 192.168.121.147 00:01:14.671 User vagrant 00:01:14.671 Port 22 00:01:14.671 UserKnownHostsFile /dev/null 00:01:14.671 StrictHostKeyChecking no 00:01:14.671 PasswordAuthentication no 00:01:14.671 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:14.671 IdentitiesOnly yes 00:01:14.671 LogLevel FATAL 00:01:14.671 ForwardAgent yes 00:01:14.671 ForwardX11 yes 00:01:14.671 00:01:14.684 [Pipeline] withEnv 00:01:14.685 [Pipeline] { 00:01:14.701 [Pipeline] sh 00:01:14.979 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:14.979 source /etc/os-release 00:01:14.979 [[ -e /image.version ]] && img=$(< /image.version) 00:01:14.979 # Minimal, systemd-like check. 
00:01:14.979 if [[ -e /.dockerenv ]]; then 00:01:14.979 # Clear garbage from the node's name: 00:01:14.979 # agt-er_autotest_547-896 -> autotest_547-896 00:01:14.979 # $HOSTNAME is the actual container id 00:01:14.979 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:14.979 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:14.979 # We can assume this is a mount from a host where container is running, 00:01:14.979 # so fetch its hostname to easily identify the target swarm worker. 00:01:14.979 container="$(< /etc/hostname) ($agent)" 00:01:14.979 else 00:01:14.979 # Fallback 00:01:14.979 container=$agent 00:01:14.979 fi 00:01:14.979 fi 00:01:14.979 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:14.979 00:01:15.248 [Pipeline] } 00:01:15.273 [Pipeline] // withEnv 00:01:15.296 [Pipeline] setCustomBuildProperty 00:01:15.318 [Pipeline] stage 00:01:15.320 [Pipeline] { (Tests) 00:01:15.331 [Pipeline] sh 00:01:15.603 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:15.873 [Pipeline] sh 00:01:16.149 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:16.431 [Pipeline] timeout 00:01:16.431 Timeout set to expire in 40 min 00:01:16.433 [Pipeline] { 00:01:16.482 [Pipeline] sh 00:01:16.762 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:17.329 HEAD is now at 33d82c0da test/bdev: Skip "hidden" nvme devices from the sysfs 00:01:17.342 [Pipeline] sh 00:01:17.623 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:17.896 [Pipeline] sh 00:01:18.178 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:18.453 [Pipeline] sh 00:01:18.735 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:01:18.998 ++ readlink -f spdk_repo 00:01:18.998 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:18.998 + [[ -n /home/vagrant/spdk_repo ]] 00:01:18.998 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:18.998 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:18.998 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:18.998 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:18.998 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:18.998 + [[ nvme-vg-autotest == pkgdep-* ]] 00:01:18.998 + cd /home/vagrant/spdk_repo 00:01:18.998 + source /etc/os-release 00:01:18.998 ++ NAME='Fedora Linux' 00:01:18.998 ++ VERSION='38 (Cloud Edition)' 00:01:18.998 ++ ID=fedora 00:01:18.998 ++ VERSION_ID=38 00:01:18.998 ++ VERSION_CODENAME= 00:01:18.998 ++ PLATFORM_ID=platform:f38 00:01:18.998 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:18.998 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:18.998 ++ LOGO=fedora-logo-icon 00:01:18.998 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:18.998 ++ HOME_URL=https://fedoraproject.org/ 00:01:18.998 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:18.998 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:18.998 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:18.998 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:18.998 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:18.998 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:18.998 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:18.998 ++ SUPPORT_END=2024-05-14 00:01:18.998 ++ VARIANT='Cloud Edition' 00:01:18.998 ++ VARIANT_ID=cloud 00:01:18.998 + uname -a 00:01:18.998 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:18.998 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:19.257 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:19.824 Hugepages 00:01:19.824 node hugesize free / total 00:01:19.824 node0 1048576kB 0 / 0 00:01:19.824 node0 2048kB 0 / 0 00:01:19.824 00:01:19.824 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:19.824 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:19.824 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:19.824 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:01:19.824 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:01:19.824 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:01:19.824 + rm -f /tmp/spdk-ld-path 00:01:19.824 + source autorun-spdk.conf 00:01:19.824 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.824 ++ SPDK_TEST_NVME=1 00:01:19.824 ++ SPDK_TEST_FTL=1 00:01:19.824 ++ SPDK_TEST_ISAL=1 00:01:19.824 ++ SPDK_RUN_ASAN=1 00:01:19.824 ++ SPDK_RUN_UBSAN=1 00:01:19.824 ++ SPDK_TEST_XNVME=1 00:01:19.824 ++ SPDK_TEST_NVME_FDP=1 00:01:19.824 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:19.824 ++ RUN_NIGHTLY=0 00:01:19.824 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:19.824 + [[ -n '' ]] 00:01:19.824 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:19.824 + for M in /var/spdk/build-*-manifest.txt 00:01:19.824 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:19.824 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:19.824 + for M in /var/spdk/build-*-manifest.txt 00:01:19.824 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:19.824 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:19.824 ++ uname 00:01:19.824 + [[ Linux == \L\i\n\u\x ]] 00:01:19.824 + sudo dmesg -T 00:01:19.824 + sudo dmesg --clear 00:01:19.824 + dmesg_pid=5361 00:01:19.824 + [[ Fedora Linux == FreeBSD ]] 00:01:19.824 + sudo dmesg -Tw 00:01:19.824 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:19.824 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:19.824 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:19.824 + [[ -x /usr/src/fio-static/fio ]] 00:01:19.824 + export FIO_BIN=/usr/src/fio-static/fio 00:01:19.824 + FIO_BIN=/usr/src/fio-static/fio 00:01:19.824 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:19.824 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:19.824 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:19.824 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:19.824 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:19.824 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:19.824 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:19.824 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:19.824 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:20.083 Test configuration: 00:01:20.083 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.083 SPDK_TEST_NVME=1 00:01:20.083 SPDK_TEST_FTL=1 00:01:20.083 SPDK_TEST_ISAL=1 00:01:20.083 SPDK_RUN_ASAN=1 00:01:20.083 SPDK_RUN_UBSAN=1 00:01:20.083 SPDK_TEST_XNVME=1 00:01:20.083 SPDK_TEST_NVME_FDP=1 00:01:20.083 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:20.083 RUN_NIGHTLY=0 14:56:58 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:20.083 14:56:58 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:20.083 14:56:58 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:20.083 14:56:58 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:20.083 14:56:58 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:20.083 14:56:58 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:20.083 14:56:58 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:20.083 14:56:58 -- paths/export.sh@5 -- $ export PATH 00:01:20.083 14:56:58 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:20.083 14:56:58 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:20.083 14:56:58 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:20.083 14:56:58 -- common/autobuild_common.sh@444 -- $ mktemp -dt 
spdk_1721055418.XXXXXX 00:01:20.083 14:56:58 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721055418.esYUj7 00:01:20.083 14:56:58 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:20.083 14:56:58 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:20.083 14:56:58 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:20.083 14:56:58 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:20.083 14:56:58 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:20.083 14:56:58 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:20.083 14:56:58 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:20.083 14:56:58 -- common/autotest_common.sh@10 -- $ set +x 00:01:20.083 14:56:58 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:01:20.083 14:56:58 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:20.083 14:56:58 -- pm/common@17 -- $ local monitor 00:01:20.083 14:56:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:20.083 14:56:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:20.083 14:56:58 -- pm/common@25 -- $ sleep 1 00:01:20.083 14:56:58 -- pm/common@21 -- $ date +%s 00:01:20.083 14:56:58 -- pm/common@21 -- $ date +%s 00:01:20.083 14:56:58 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721055418 00:01:20.083 14:56:58 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721055418 00:01:20.083 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721055418_collect-vmstat.pm.log 00:01:20.083 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721055418_collect-cpu-load.pm.log 00:01:21.020 14:56:59 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:21.020 14:56:59 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:21.020 14:56:59 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:21.020 14:56:59 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:21.020 14:56:59 -- spdk/autobuild.sh@16 -- $ date -u 00:01:21.020 Mon Jul 15 02:56:59 PM UTC 2024 00:01:21.020 14:56:59 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:21.020 v24.09-pre-203-g33d82c0da 00:01:21.020 14:56:59 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:21.020 14:56:59 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:21.020 14:56:59 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:21.020 14:56:59 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:21.020 14:56:59 -- common/autotest_common.sh@10 -- $ set +x 00:01:21.020 ************************************ 00:01:21.020 START TEST asan 00:01:21.020 ************************************ 00:01:21.020 using asan 00:01:21.020 14:56:59 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan' 00:01:21.020 00:01:21.020 
real 0m0.000s 00:01:21.020 user 0m0.000s 00:01:21.020 sys 0m0.000s 00:01:21.020 14:56:59 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:21.020 14:56:59 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:21.020 ************************************ 00:01:21.020 END TEST asan 00:01:21.020 ************************************ 00:01:21.279 14:56:59 -- common/autotest_common.sh@1142 -- $ return 0 00:01:21.279 14:56:59 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:21.279 14:56:59 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:21.279 14:56:59 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:21.279 14:56:59 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:21.279 14:56:59 -- common/autotest_common.sh@10 -- $ set +x 00:01:21.279 ************************************ 00:01:21.279 START TEST ubsan 00:01:21.279 ************************************ 00:01:21.279 using ubsan 00:01:21.279 14:56:59 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:21.279 00:01:21.279 real 0m0.000s 00:01:21.279 user 0m0.000s 00:01:21.279 sys 0m0.000s 00:01:21.279 14:56:59 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:21.279 14:56:59 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:21.279 ************************************ 00:01:21.279 END TEST ubsan 00:01:21.279 ************************************ 00:01:21.279 14:56:59 -- common/autotest_common.sh@1142 -- $ return 0 00:01:21.279 14:56:59 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:21.279 14:56:59 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:21.279 14:56:59 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:21.279 14:56:59 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:21.279 14:56:59 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:21.279 14:56:59 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:21.279 14:56:59 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:21.279 14:56:59 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:21.279 14:56:59 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:01:21.279 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:21.279 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:21.847 Using 'verbs' RDMA provider 00:01:37.690 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:01:52.564 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:01:52.564 Creating mk/config.mk...done. 00:01:52.564 Creating mk/cc.flags.mk...done. 00:01:52.564 Type 'make' to build. 
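Before the make step that follows, the tree was configured with the flag set shown above (debug, werror, ASAN/UBSAN, coverage, xnvme, ublk, shared libraries). For reference, a minimal sketch of reproducing that configuration and build by hand inside the guest; the flags are copied from the logged configure line, and the -j10 job count is an assumption mirroring the 10 vCPUs given to the VM:

    # Configure SPDK with the same debug/sanitizer/feature flags as this run,
    # then build; DPDK and the env wrapper default to the in-tree copies.
    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-asan --enable-coverage --with-ublk \
        --with-xnvme --with-shared
    make -j10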
00:01:52.564 14:57:30 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:01:52.564 14:57:30 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:52.564 14:57:30 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:52.564 14:57:30 -- common/autotest_common.sh@10 -- $ set +x 00:01:52.564 ************************************ 00:01:52.564 START TEST make 00:01:52.564 ************************************ 00:01:52.564 14:57:30 make -- common/autotest_common.sh@1123 -- $ make -j10 00:01:52.822 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:01:52.822 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:01:52.822 meson setup builddir \ 00:01:52.822 -Dwith-libaio=enabled \ 00:01:52.822 -Dwith-liburing=enabled \ 00:01:52.822 -Dwith-libvfn=disabled \ 00:01:52.822 -Dwith-spdk=false && \ 00:01:52.822 meson compile -C builddir && \ 00:01:52.822 cd -) 00:01:52.822 make[1]: Nothing to be done for 'all'. 00:01:55.406 The Meson build system 00:01:55.406 Version: 1.3.1 00:01:55.406 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:01:55.406 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:01:55.406 Build type: native build 00:01:55.406 Project name: xnvme 00:01:55.406 Project version: 0.7.3 00:01:55.406 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:55.406 C linker for the host machine: cc ld.bfd 2.39-16 00:01:55.406 Host machine cpu family: x86_64 00:01:55.406 Host machine cpu: x86_64 00:01:55.406 Message: host_machine.system: linux 00:01:55.406 Compiler for C supports arguments -Wno-missing-braces: YES 00:01:55.406 Compiler for C supports arguments -Wno-cast-function-type: YES 00:01:55.406 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:55.406 Run-time dependency threads found: YES 00:01:55.406 Has header "setupapi.h" : NO 00:01:55.406 Has header "linux/blkzoned.h" : YES 00:01:55.406 Has header "linux/blkzoned.h" : YES (cached) 00:01:55.406 Has header "libaio.h" : YES 00:01:55.406 Library aio found: YES 00:01:55.406 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:55.406 Run-time dependency liburing found: YES 2.2 00:01:55.406 Dependency libvfn skipped: feature with-libvfn disabled 00:01:55.406 Run-time dependency appleframeworks found: NO (tried framework) 00:01:55.406 Run-time dependency appleframeworks found: NO (tried framework) 00:01:55.406 Configuring xnvme_config.h using configuration 00:01:55.406 Configuring xnvme.spec using configuration 00:01:55.406 Run-time dependency bash-completion found: YES 2.11 00:01:55.406 Message: Bash-completions: /usr/share/bash-completion/completions 00:01:55.406 Program cp found: YES (/usr/bin/cp) 00:01:55.406 Has header "winsock2.h" : NO 00:01:55.406 Has header "dbghelp.h" : NO 00:01:55.406 Library rpcrt4 found: NO 00:01:55.406 Library rt found: YES 00:01:55.406 Checking for function "clock_gettime" with dependency -lrt: YES 00:01:55.406 Found CMake: /usr/bin/cmake (3.27.7) 00:01:55.406 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:01:55.406 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:01:55.406 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:01:55.406 Build targets in project: 32 00:01:55.406 00:01:55.406 xnvme 0.7.3 00:01:55.406 00:01:55.406 User defined options 00:01:55.406 with-libaio : enabled 00:01:55.406 with-liburing: enabled 00:01:55.406 with-libvfn : disabled 00:01:55.406 with-spdk : false 00:01:55.406 00:01:55.406 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:55.406 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:01:55.406 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:01:55.406 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:01:55.406 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:01:55.406 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:01:55.406 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:01:55.406 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:01:55.406 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:01:55.406 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:01:55.664 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:01:55.664 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:01:55.664 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:01:55.664 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:01:55.664 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:01:55.664 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:01:55.664 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:01:55.664 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:01:55.664 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:01:55.664 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:01:55.664 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:01:55.664 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:01:55.664 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:01:55.664 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:01:55.664 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:01:55.664 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:01:55.664 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:01:55.664 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:01:55.664 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:01:55.664 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:01:55.923 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:01:55.923 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:01:55.923 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:01:55.923 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:01:55.923 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:01:55.923 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:01:55.923 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:01:55.923 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:01:55.923 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:01:55.923 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:01:55.923 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:01:55.923 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:01:55.923 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:01:55.923 
[42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:01:55.923 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:01:55.923 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:01:55.923 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:01:55.923 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:01:55.923 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:01:55.923 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:01:55.923 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:01:55.923 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:01:55.923 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:01:55.923 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:01:55.923 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:01:55.923 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:01:55.923 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:01:55.923 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:01:55.923 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:01:55.923 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:01:55.923 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:01:56.182 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:01:56.182 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:01:56.182 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:01:56.182 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:01:56.182 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:01:56.182 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:01:56.182 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:01:56.182 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:01:56.182 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:01:56.182 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:01:56.182 [70/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:01:56.182 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:01:56.182 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:01:56.182 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:01:56.182 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:01:56.182 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:01:56.182 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:01:56.441 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:01:56.441 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:01:56.441 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:01:56.441 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:01:56.441 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:01:56.441 [82/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:01:56.441 [83/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:01:56.441 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:01:56.441 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:01:56.441 [86/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:01:56.441 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:01:56.441 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:01:56.441 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:01:56.441 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:01:56.441 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:01:56.441 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:01:56.441 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:01:56.441 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:01:56.441 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:01:56.700 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:01:56.700 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:01:56.700 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:01:56.700 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:01:56.700 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:01:56.700 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:01:56.700 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:01:56.700 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:01:56.700 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:01:56.700 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:01:56.700 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:01:56.700 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:01:56.700 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:01:56.700 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:01:56.700 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:01:56.700 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:01:56.700 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:01:56.700 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:01:56.700 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:01:56.700 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:01:56.700 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:01:56.700 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:01:56.700 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:01:56.700 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:01:56.700 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:01:56.700 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:01:56.700 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:01:56.700 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:01:56.959 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:01:56.959 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:01:56.959 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:01:56.959 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:01:56.959 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:01:56.959 [129/203] Compiling C object 
lib/libxnvme.a.p/xnvme_geo.c.o 00:01:56.959 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:01:56.959 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:01:56.959 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:01:56.959 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:01:56.959 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:01:56.959 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:01:56.959 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:01:56.959 [137/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:01:56.959 [138/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:01:56.959 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:01:56.959 [140/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:01:56.959 [141/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:01:56.959 [142/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:01:56.959 [143/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:01:57.218 [144/203] Linking target lib/libxnvme.so 00:01:57.218 [145/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:01:57.218 [146/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:01:57.218 [147/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:01:57.218 [148/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:01:57.218 [149/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:01:57.218 [150/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:01:57.218 [151/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:01:57.218 [152/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:01:57.218 [153/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:01:57.218 [154/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:01:57.218 [155/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:01:57.477 [156/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:01:57.477 [157/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:01:57.477 [158/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:01:57.477 [159/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:01:57.477 [160/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:01:57.477 [161/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:01:57.477 [162/203] Compiling C object tools/xdd.p/xdd.c.o 00:01:57.477 [163/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:01:57.477 [164/203] Compiling C object tools/kvs.p/kvs.c.o 00:01:57.477 [165/203] Compiling C object tools/zoned.p/zoned.c.o 00:01:57.477 [166/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:01:57.477 [167/203] Compiling C object tools/lblk.p/lblk.c.o 00:01:57.477 [168/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:01:57.477 [169/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:01:57.477 [170/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:01:57.736 [171/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:01:57.736 [172/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:01:57.736 [173/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:01:57.736 [174/203] Linking static target lib/libxnvme.a 00:01:57.736 [175/203] Linking target 
tests/xnvme_tests_async_intf 00:01:57.736 [176/203] Linking target tests/xnvme_tests_enum 00:01:57.736 [177/203] Linking target tests/xnvme_tests_buf 00:01:57.736 [178/203] Linking target tests/xnvme_tests_cli 00:01:57.736 [179/203] Linking target tests/xnvme_tests_scc 00:01:57.736 [180/203] Linking target tests/xnvme_tests_ioworker 00:01:57.736 [181/203] Linking target tests/xnvme_tests_xnvme_cli 00:01:57.736 [182/203] Linking target tests/xnvme_tests_xnvme_file 00:01:57.736 [183/203] Linking target tests/xnvme_tests_lblk 00:01:57.736 [184/203] Linking target tests/xnvme_tests_znd_zrwa 00:01:57.736 [185/203] Linking target tests/xnvme_tests_znd_append 00:01:57.736 [186/203] Linking target tests/xnvme_tests_znd_explicit_open 00:01:57.736 [187/203] Linking target tests/xnvme_tests_znd_state 00:01:57.736 [188/203] Linking target tests/xnvme_tests_kvs 00:01:57.736 [189/203] Linking target tools/xdd 00:01:57.736 [190/203] Linking target examples/xnvme_enum 00:01:57.736 [191/203] Linking target tools/xnvme_file 00:01:57.736 [192/203] Linking target tests/xnvme_tests_map 00:01:57.736 [193/203] Linking target examples/xnvme_hello 00:01:57.736 [194/203] Linking target examples/xnvme_dev 00:01:57.736 [195/203] Linking target tools/lblk 00:01:57.736 [196/203] Linking target tools/zoned 00:01:57.736 [197/203] Linking target tools/xnvme 00:01:57.736 [198/203] Linking target tools/kvs 00:01:57.736 [199/203] Linking target examples/xnvme_single_async 00:01:57.995 [200/203] Linking target examples/xnvme_single_sync 00:01:57.995 [201/203] Linking target examples/xnvme_io_async 00:01:57.995 [202/203] Linking target examples/zoned_io_async 00:01:57.995 [203/203] Linking target examples/zoned_io_sync 00:01:57.995 INFO: autodetecting backend as ninja 00:01:57.995 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:01:57.995 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:03.273 The Meson build system 00:02:03.273 Version: 1.3.1 00:02:03.273 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:03.273 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:03.273 Build type: native build 00:02:03.273 Program cat found: YES (/usr/bin/cat) 00:02:03.273 Project name: DPDK 00:02:03.273 Project version: 24.03.0 00:02:03.273 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:03.273 C linker for the host machine: cc ld.bfd 2.39-16 00:02:03.273 Host machine cpu family: x86_64 00:02:03.273 Host machine cpu: x86_64 00:02:03.273 Message: ## Building in Developer Mode ## 00:02:03.273 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:03.273 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:03.273 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:03.273 Program python3 found: YES (/usr/bin/python3) 00:02:03.273 Program cat found: YES (/usr/bin/cat) 00:02:03.273 Compiler for C supports arguments -march=native: YES 00:02:03.273 Checking for size of "void *" : 8 00:02:03.273 Checking for size of "void *" : 8 (cached) 00:02:03.273 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:03.273 Library m found: YES 00:02:03.273 Library numa found: YES 00:02:03.273 Has header "numaif.h" : YES 00:02:03.273 Library fdt found: NO 00:02:03.273 Library execinfo found: NO 00:02:03.273 Has header "execinfo.h" : YES 00:02:03.273 Found pkg-config: YES (/usr/bin/pkg-config) 
1.8.0 00:02:03.273 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:03.273 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:03.273 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:03.273 Run-time dependency openssl found: YES 3.0.9 00:02:03.273 Run-time dependency libpcap found: YES 1.10.4 00:02:03.273 Has header "pcap.h" with dependency libpcap: YES 00:02:03.273 Compiler for C supports arguments -Wcast-qual: YES 00:02:03.273 Compiler for C supports arguments -Wdeprecated: YES 00:02:03.273 Compiler for C supports arguments -Wformat: YES 00:02:03.273 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:03.273 Compiler for C supports arguments -Wformat-security: NO 00:02:03.273 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:03.273 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:03.273 Compiler for C supports arguments -Wnested-externs: YES 00:02:03.273 Compiler for C supports arguments -Wold-style-definition: YES 00:02:03.273 Compiler for C supports arguments -Wpointer-arith: YES 00:02:03.273 Compiler for C supports arguments -Wsign-compare: YES 00:02:03.273 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:03.273 Compiler for C supports arguments -Wundef: YES 00:02:03.273 Compiler for C supports arguments -Wwrite-strings: YES 00:02:03.273 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:03.273 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:03.273 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:03.273 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:03.273 Program objdump found: YES (/usr/bin/objdump) 00:02:03.273 Compiler for C supports arguments -mavx512f: YES 00:02:03.273 Checking if "AVX512 checking" compiles: YES 00:02:03.273 Fetching value of define "__SSE4_2__" : 1 00:02:03.273 Fetching value of define "__AES__" : 1 00:02:03.273 Fetching value of define "__AVX__" : 1 00:02:03.273 Fetching value of define "__AVX2__" : 1 00:02:03.273 Fetching value of define "__AVX512BW__" : 1 00:02:03.273 Fetching value of define "__AVX512CD__" : 1 00:02:03.273 Fetching value of define "__AVX512DQ__" : 1 00:02:03.273 Fetching value of define "__AVX512F__" : 1 00:02:03.273 Fetching value of define "__AVX512VL__" : 1 00:02:03.273 Fetching value of define "__PCLMUL__" : 1 00:02:03.273 Fetching value of define "__RDRND__" : 1 00:02:03.273 Fetching value of define "__RDSEED__" : 1 00:02:03.273 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:03.273 Fetching value of define "__znver1__" : (undefined) 00:02:03.273 Fetching value of define "__znver2__" : (undefined) 00:02:03.273 Fetching value of define "__znver3__" : (undefined) 00:02:03.273 Fetching value of define "__znver4__" : (undefined) 00:02:03.273 Library asan found: YES 00:02:03.273 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:03.273 Message: lib/log: Defining dependency "log" 00:02:03.273 Message: lib/kvargs: Defining dependency "kvargs" 00:02:03.273 Message: lib/telemetry: Defining dependency "telemetry" 00:02:03.273 Library rt found: YES 00:02:03.273 Checking for function "getentropy" : NO 00:02:03.273 Message: lib/eal: Defining dependency "eal" 00:02:03.273 Message: lib/ring: Defining dependency "ring" 00:02:03.273 Message: lib/rcu: Defining dependency "rcu" 00:02:03.273 Message: lib/mempool: Defining dependency "mempool" 00:02:03.273 Message: lib/mbuf: Defining dependency "mbuf" 00:02:03.273 Fetching value of 
define "__PCLMUL__" : 1 (cached) 00:02:03.273 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:03.273 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:03.273 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:03.273 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:03.273 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:03.273 Compiler for C supports arguments -mpclmul: YES 00:02:03.274 Compiler for C supports arguments -maes: YES 00:02:03.274 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:03.274 Compiler for C supports arguments -mavx512bw: YES 00:02:03.274 Compiler for C supports arguments -mavx512dq: YES 00:02:03.274 Compiler for C supports arguments -mavx512vl: YES 00:02:03.274 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:03.274 Compiler for C supports arguments -mavx2: YES 00:02:03.274 Compiler for C supports arguments -mavx: YES 00:02:03.274 Message: lib/net: Defining dependency "net" 00:02:03.274 Message: lib/meter: Defining dependency "meter" 00:02:03.274 Message: lib/ethdev: Defining dependency "ethdev" 00:02:03.274 Message: lib/pci: Defining dependency "pci" 00:02:03.274 Message: lib/cmdline: Defining dependency "cmdline" 00:02:03.274 Message: lib/hash: Defining dependency "hash" 00:02:03.274 Message: lib/timer: Defining dependency "timer" 00:02:03.274 Message: lib/compressdev: Defining dependency "compressdev" 00:02:03.274 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:03.274 Message: lib/dmadev: Defining dependency "dmadev" 00:02:03.274 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:03.274 Message: lib/power: Defining dependency "power" 00:02:03.274 Message: lib/reorder: Defining dependency "reorder" 00:02:03.274 Message: lib/security: Defining dependency "security" 00:02:03.274 Has header "linux/userfaultfd.h" : YES 00:02:03.274 Has header "linux/vduse.h" : YES 00:02:03.274 Message: lib/vhost: Defining dependency "vhost" 00:02:03.274 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:03.274 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:03.274 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:03.274 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:03.274 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:03.274 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:03.274 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:03.274 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:03.274 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:03.274 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:03.274 Program doxygen found: YES (/usr/bin/doxygen) 00:02:03.274 Configuring doxy-api-html.conf using configuration 00:02:03.274 Configuring doxy-api-man.conf using configuration 00:02:03.274 Program mandb found: YES (/usr/bin/mandb) 00:02:03.274 Program sphinx-build found: NO 00:02:03.274 Configuring rte_build_config.h using configuration 00:02:03.274 Message: 00:02:03.274 ================= 00:02:03.274 Applications Enabled 00:02:03.274 ================= 00:02:03.274 00:02:03.274 apps: 00:02:03.274 00:02:03.274 00:02:03.274 Message: 00:02:03.274 ================= 00:02:03.274 Libraries Enabled 00:02:03.274 ================= 00:02:03.274 00:02:03.274 libs: 00:02:03.274 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:03.274 
net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:03.274 cryptodev, dmadev, power, reorder, security, vhost, 00:02:03.274 00:02:03.274 Message: 00:02:03.274 =============== 00:02:03.274 Drivers Enabled 00:02:03.274 =============== 00:02:03.274 00:02:03.274 common: 00:02:03.274 00:02:03.274 bus: 00:02:03.274 pci, vdev, 00:02:03.274 mempool: 00:02:03.274 ring, 00:02:03.274 dma: 00:02:03.274 00:02:03.274 net: 00:02:03.274 00:02:03.274 crypto: 00:02:03.274 00:02:03.274 compress: 00:02:03.274 00:02:03.274 vdpa: 00:02:03.274 00:02:03.274 00:02:03.274 Message: 00:02:03.274 ================= 00:02:03.274 Content Skipped 00:02:03.274 ================= 00:02:03.274 00:02:03.274 apps: 00:02:03.274 dumpcap: explicitly disabled via build config 00:02:03.274 graph: explicitly disabled via build config 00:02:03.274 pdump: explicitly disabled via build config 00:02:03.274 proc-info: explicitly disabled via build config 00:02:03.274 test-acl: explicitly disabled via build config 00:02:03.274 test-bbdev: explicitly disabled via build config 00:02:03.274 test-cmdline: explicitly disabled via build config 00:02:03.274 test-compress-perf: explicitly disabled via build config 00:02:03.274 test-crypto-perf: explicitly disabled via build config 00:02:03.274 test-dma-perf: explicitly disabled via build config 00:02:03.274 test-eventdev: explicitly disabled via build config 00:02:03.274 test-fib: explicitly disabled via build config 00:02:03.274 test-flow-perf: explicitly disabled via build config 00:02:03.274 test-gpudev: explicitly disabled via build config 00:02:03.274 test-mldev: explicitly disabled via build config 00:02:03.274 test-pipeline: explicitly disabled via build config 00:02:03.274 test-pmd: explicitly disabled via build config 00:02:03.274 test-regex: explicitly disabled via build config 00:02:03.274 test-sad: explicitly disabled via build config 00:02:03.274 test-security-perf: explicitly disabled via build config 00:02:03.274 00:02:03.274 libs: 00:02:03.274 argparse: explicitly disabled via build config 00:02:03.274 metrics: explicitly disabled via build config 00:02:03.274 acl: explicitly disabled via build config 00:02:03.274 bbdev: explicitly disabled via build config 00:02:03.274 bitratestats: explicitly disabled via build config 00:02:03.274 bpf: explicitly disabled via build config 00:02:03.274 cfgfile: explicitly disabled via build config 00:02:03.274 distributor: explicitly disabled via build config 00:02:03.274 efd: explicitly disabled via build config 00:02:03.274 eventdev: explicitly disabled via build config 00:02:03.274 dispatcher: explicitly disabled via build config 00:02:03.274 gpudev: explicitly disabled via build config 00:02:03.274 gro: explicitly disabled via build config 00:02:03.274 gso: explicitly disabled via build config 00:02:03.274 ip_frag: explicitly disabled via build config 00:02:03.274 jobstats: explicitly disabled via build config 00:02:03.274 latencystats: explicitly disabled via build config 00:02:03.274 lpm: explicitly disabled via build config 00:02:03.274 member: explicitly disabled via build config 00:02:03.274 pcapng: explicitly disabled via build config 00:02:03.274 rawdev: explicitly disabled via build config 00:02:03.274 regexdev: explicitly disabled via build config 00:02:03.274 mldev: explicitly disabled via build config 00:02:03.274 rib: explicitly disabled via build config 00:02:03.274 sched: explicitly disabled via build config 00:02:03.274 stack: explicitly disabled via build config 00:02:03.274 ipsec: explicitly disabled via build 
config 00:02:03.274 pdcp: explicitly disabled via build config 00:02:03.274 fib: explicitly disabled via build config 00:02:03.274 port: explicitly disabled via build config 00:02:03.274 pdump: explicitly disabled via build config 00:02:03.274 table: explicitly disabled via build config 00:02:03.274 pipeline: explicitly disabled via build config 00:02:03.274 graph: explicitly disabled via build config 00:02:03.274 node: explicitly disabled via build config 00:02:03.274 00:02:03.274 drivers: 00:02:03.274 common/cpt: not in enabled drivers build config 00:02:03.274 common/dpaax: not in enabled drivers build config 00:02:03.274 common/iavf: not in enabled drivers build config 00:02:03.274 common/idpf: not in enabled drivers build config 00:02:03.274 common/ionic: not in enabled drivers build config 00:02:03.274 common/mvep: not in enabled drivers build config 00:02:03.274 common/octeontx: not in enabled drivers build config 00:02:03.274 bus/auxiliary: not in enabled drivers build config 00:02:03.274 bus/cdx: not in enabled drivers build config 00:02:03.274 bus/dpaa: not in enabled drivers build config 00:02:03.274 bus/fslmc: not in enabled drivers build config 00:02:03.274 bus/ifpga: not in enabled drivers build config 00:02:03.274 bus/platform: not in enabled drivers build config 00:02:03.274 bus/uacce: not in enabled drivers build config 00:02:03.274 bus/vmbus: not in enabled drivers build config 00:02:03.274 common/cnxk: not in enabled drivers build config 00:02:03.274 common/mlx5: not in enabled drivers build config 00:02:03.274 common/nfp: not in enabled drivers build config 00:02:03.274 common/nitrox: not in enabled drivers build config 00:02:03.274 common/qat: not in enabled drivers build config 00:02:03.274 common/sfc_efx: not in enabled drivers build config 00:02:03.274 mempool/bucket: not in enabled drivers build config 00:02:03.274 mempool/cnxk: not in enabled drivers build config 00:02:03.274 mempool/dpaa: not in enabled drivers build config 00:02:03.274 mempool/dpaa2: not in enabled drivers build config 00:02:03.274 mempool/octeontx: not in enabled drivers build config 00:02:03.274 mempool/stack: not in enabled drivers build config 00:02:03.274 dma/cnxk: not in enabled drivers build config 00:02:03.274 dma/dpaa: not in enabled drivers build config 00:02:03.274 dma/dpaa2: not in enabled drivers build config 00:02:03.274 dma/hisilicon: not in enabled drivers build config 00:02:03.274 dma/idxd: not in enabled drivers build config 00:02:03.274 dma/ioat: not in enabled drivers build config 00:02:03.274 dma/skeleton: not in enabled drivers build config 00:02:03.274 net/af_packet: not in enabled drivers build config 00:02:03.274 net/af_xdp: not in enabled drivers build config 00:02:03.274 net/ark: not in enabled drivers build config 00:02:03.274 net/atlantic: not in enabled drivers build config 00:02:03.274 net/avp: not in enabled drivers build config 00:02:03.274 net/axgbe: not in enabled drivers build config 00:02:03.274 net/bnx2x: not in enabled drivers build config 00:02:03.274 net/bnxt: not in enabled drivers build config 00:02:03.274 net/bonding: not in enabled drivers build config 00:02:03.274 net/cnxk: not in enabled drivers build config 00:02:03.274 net/cpfl: not in enabled drivers build config 00:02:03.274 net/cxgbe: not in enabled drivers build config 00:02:03.274 net/dpaa: not in enabled drivers build config 00:02:03.274 net/dpaa2: not in enabled drivers build config 00:02:03.274 net/e1000: not in enabled drivers build config 00:02:03.274 net/ena: not in enabled drivers 
build config 00:02:03.274 net/enetc: not in enabled drivers build config 00:02:03.274 net/enetfec: not in enabled drivers build config 00:02:03.274 net/enic: not in enabled drivers build config 00:02:03.274 net/failsafe: not in enabled drivers build config 00:02:03.275 net/fm10k: not in enabled drivers build config 00:02:03.275 net/gve: not in enabled drivers build config 00:02:03.275 net/hinic: not in enabled drivers build config 00:02:03.275 net/hns3: not in enabled drivers build config 00:02:03.275 net/i40e: not in enabled drivers build config 00:02:03.275 net/iavf: not in enabled drivers build config 00:02:03.275 net/ice: not in enabled drivers build config 00:02:03.275 net/idpf: not in enabled drivers build config 00:02:03.275 net/igc: not in enabled drivers build config 00:02:03.275 net/ionic: not in enabled drivers build config 00:02:03.275 net/ipn3ke: not in enabled drivers build config 00:02:03.275 net/ixgbe: not in enabled drivers build config 00:02:03.275 net/mana: not in enabled drivers build config 00:02:03.275 net/memif: not in enabled drivers build config 00:02:03.275 net/mlx4: not in enabled drivers build config 00:02:03.275 net/mlx5: not in enabled drivers build config 00:02:03.275 net/mvneta: not in enabled drivers build config 00:02:03.275 net/mvpp2: not in enabled drivers build config 00:02:03.275 net/netvsc: not in enabled drivers build config 00:02:03.275 net/nfb: not in enabled drivers build config 00:02:03.275 net/nfp: not in enabled drivers build config 00:02:03.275 net/ngbe: not in enabled drivers build config 00:02:03.275 net/null: not in enabled drivers build config 00:02:03.275 net/octeontx: not in enabled drivers build config 00:02:03.275 net/octeon_ep: not in enabled drivers build config 00:02:03.275 net/pcap: not in enabled drivers build config 00:02:03.275 net/pfe: not in enabled drivers build config 00:02:03.275 net/qede: not in enabled drivers build config 00:02:03.275 net/ring: not in enabled drivers build config 00:02:03.275 net/sfc: not in enabled drivers build config 00:02:03.275 net/softnic: not in enabled drivers build config 00:02:03.275 net/tap: not in enabled drivers build config 00:02:03.275 net/thunderx: not in enabled drivers build config 00:02:03.275 net/txgbe: not in enabled drivers build config 00:02:03.275 net/vdev_netvsc: not in enabled drivers build config 00:02:03.275 net/vhost: not in enabled drivers build config 00:02:03.275 net/virtio: not in enabled drivers build config 00:02:03.275 net/vmxnet3: not in enabled drivers build config 00:02:03.275 raw/*: missing internal dependency, "rawdev" 00:02:03.275 crypto/armv8: not in enabled drivers build config 00:02:03.275 crypto/bcmfs: not in enabled drivers build config 00:02:03.275 crypto/caam_jr: not in enabled drivers build config 00:02:03.275 crypto/ccp: not in enabled drivers build config 00:02:03.275 crypto/cnxk: not in enabled drivers build config 00:02:03.275 crypto/dpaa_sec: not in enabled drivers build config 00:02:03.275 crypto/dpaa2_sec: not in enabled drivers build config 00:02:03.275 crypto/ipsec_mb: not in enabled drivers build config 00:02:03.275 crypto/mlx5: not in enabled drivers build config 00:02:03.275 crypto/mvsam: not in enabled drivers build config 00:02:03.275 crypto/nitrox: not in enabled drivers build config 00:02:03.275 crypto/null: not in enabled drivers build config 00:02:03.275 crypto/octeontx: not in enabled drivers build config 00:02:03.275 crypto/openssl: not in enabled drivers build config 00:02:03.275 crypto/scheduler: not in enabled drivers build config 
00:02:03.275 crypto/uadk: not in enabled drivers build config 00:02:03.275 crypto/virtio: not in enabled drivers build config 00:02:03.275 compress/isal: not in enabled drivers build config 00:02:03.275 compress/mlx5: not in enabled drivers build config 00:02:03.275 compress/nitrox: not in enabled drivers build config 00:02:03.275 compress/octeontx: not in enabled drivers build config 00:02:03.275 compress/zlib: not in enabled drivers build config 00:02:03.275 regex/*: missing internal dependency, "regexdev" 00:02:03.275 ml/*: missing internal dependency, "mldev" 00:02:03.275 vdpa/ifc: not in enabled drivers build config 00:02:03.275 vdpa/mlx5: not in enabled drivers build config 00:02:03.275 vdpa/nfp: not in enabled drivers build config 00:02:03.275 vdpa/sfc: not in enabled drivers build config 00:02:03.275 event/*: missing internal dependency, "eventdev" 00:02:03.275 baseband/*: missing internal dependency, "bbdev" 00:02:03.275 gpu/*: missing internal dependency, "gpudev" 00:02:03.275 00:02:03.275 00:02:03.532 Build targets in project: 85 00:02:03.532 00:02:03.532 DPDK 24.03.0 00:02:03.532 00:02:03.532 User defined options 00:02:03.532 buildtype : debug 00:02:03.532 default_library : shared 00:02:03.532 libdir : lib 00:02:03.532 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:03.532 b_sanitize : address 00:02:03.532 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:03.532 c_link_args : 00:02:03.532 cpu_instruction_set: native 00:02:03.532 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:03.532 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:03.532 enable_docs : false 00:02:03.532 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:03.532 enable_kmods : false 00:02:03.532 max_lcores : 128 00:02:03.532 tests : false 00:02:03.532 00:02:03.532 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:03.789 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:04.046 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:04.046 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:04.046 [3/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:04.046 [4/268] Linking static target lib/librte_kvargs.a 00:02:04.046 [5/268] Linking static target lib/librte_log.a 00:02:04.046 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:04.303 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:04.303 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:04.304 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:04.562 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:04.562 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:04.562 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:04.562 [13/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.562 [14/268] 
Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:04.819 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:04.819 [16/268] Linking static target lib/librte_telemetry.a 00:02:04.819 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:04.819 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:05.079 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.079 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:05.079 [21/268] Linking target lib/librte_log.so.24.1 00:02:05.079 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:05.079 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:05.079 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:05.340 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:05.340 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:05.340 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:05.340 [28/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:05.340 [29/268] Linking target lib/librte_kvargs.so.24.1 00:02:05.340 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:05.598 [31/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:05.598 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.598 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:05.598 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:05.598 [35/268] Linking target lib/librte_telemetry.so.24.1 00:02:05.856 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:05.856 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:05.856 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:05.856 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:05.856 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:05.856 [41/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:05.856 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:06.114 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:06.114 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:06.114 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:06.114 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:06.114 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:06.114 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:06.373 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:06.631 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:06.631 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:06.631 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:06.631 [53/268] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:06.631 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:06.631 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:06.889 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:06.889 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:06.889 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:07.147 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:07.147 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:07.147 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:07.147 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:07.147 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:07.147 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:07.407 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:07.407 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:07.407 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:07.407 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:07.666 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:07.666 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:07.926 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:07.926 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:07.926 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:07.926 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:07.926 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:07.926 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:07.926 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:07.926 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:08.185 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:08.185 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:08.443 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:08.443 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:08.443 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:08.443 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:08.701 [85/268] Linking static target lib/librte_eal.a 00:02:08.701 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:08.701 [87/268] Linking static target lib/librte_ring.a 00:02:08.701 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:08.701 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:08.966 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:08.966 [91/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:08.966 [92/268] Linking static target lib/librte_rcu.a 00:02:08.966 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:08.966 [94/268] Linking static target lib/librte_mempool.a 00:02:08.966 
[95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:09.240 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:09.240 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.240 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:09.499 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:09.499 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:09.499 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.757 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:09.757 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:09.757 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:09.757 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:10.016 [106/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:10.016 [107/268] Linking static target lib/librte_net.a 00:02:10.275 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:10.275 [109/268] Linking static target lib/librte_meter.a 00:02:10.275 [110/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.534 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:10.534 [112/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.534 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:10.534 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:10.534 [115/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:10.534 [116/268] Linking static target lib/librte_mbuf.a 00:02:10.534 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:10.792 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.050 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:11.617 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:11.617 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:11.617 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:11.617 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:11.617 [124/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.875 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:11.875 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:11.875 [127/268] Linking static target lib/librte_pci.a 00:02:11.875 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:11.875 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:12.134 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:12.134 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:12.393 [132/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.393 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:12.393 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:12.393 [135/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:12.393 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:12.393 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:12.393 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:12.393 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:12.660 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:12.661 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:12.661 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:12.661 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:12.661 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:12.661 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:12.661 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:12.661 [147/268] Linking static target lib/librte_cmdline.a 00:02:12.931 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:13.190 [149/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:13.190 [150/268] Linking static target lib/librte_timer.a 00:02:13.190 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:13.190 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:13.448 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:13.448 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:13.448 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:13.706 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:13.706 [157/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.964 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:13.964 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:13.964 [160/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:13.964 [161/268] Linking static target lib/librte_compressdev.a 00:02:13.964 [162/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:13.964 [163/268] Linking static target lib/librte_ethdev.a 00:02:14.222 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:14.222 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:14.223 [166/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:14.223 [167/268] Linking static target lib/librte_dmadev.a 00:02:14.223 [168/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.481 [169/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:14.481 [170/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:14.481 [171/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:14.481 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:14.481 [173/268] Linking static target lib/librte_hash.a 00:02:14.739 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:14.997 [175/268] Compiling C 
object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:14.997 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.997 [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:14.997 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:14.997 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.997 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:14.997 [181/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:15.254 [182/268] Linking static target lib/librte_cryptodev.a 00:02:15.254 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:15.512 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:15.512 [185/268] Linking static target lib/librte_power.a 00:02:15.512 [186/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:15.512 [187/268] Linking static target lib/librte_reorder.a 00:02:15.770 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:15.770 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:15.770 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:15.770 [191/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.770 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:15.770 [193/268] Linking static target lib/librte_security.a 00:02:16.335 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.609 [195/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.609 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:16.609 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.872 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:16.872 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:16.872 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:17.130 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:17.388 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:17.388 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:17.388 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:17.646 [205/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:17.646 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:17.646 [207/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.646 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:17.646 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:17.909 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:17.909 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:17.909 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:17.909 [213/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:17.909 [214/268] Compiling C object 
drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:17.909 [215/268] Linking static target drivers/librte_bus_pci.a 00:02:18.172 [216/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:18.172 [217/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:18.172 [218/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:18.172 [219/268] Linking static target drivers/librte_bus_vdev.a 00:02:18.172 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:18.172 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:18.431 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.431 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:18.431 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:18.431 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:18.431 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:18.431 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.367 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:20.302 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.302 [230/268] Linking target lib/librte_eal.so.24.1 00:02:20.561 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:20.561 [232/268] Linking target lib/librte_meter.so.24.1 00:02:20.561 [233/268] Linking target lib/librte_timer.so.24.1 00:02:20.561 [234/268] Linking target lib/librte_pci.so.24.1 00:02:20.561 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:20.561 [236/268] Linking target lib/librte_ring.so.24.1 00:02:20.561 [237/268] Linking target lib/librte_dmadev.so.24.1 00:02:20.561 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:20.561 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:20.561 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:20.561 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:20.821 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:20.821 [243/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:20.821 [244/268] Linking target lib/librte_mempool.so.24.1 00:02:20.821 [245/268] Linking target lib/librte_rcu.so.24.1 00:02:20.821 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:20.821 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:21.123 [248/268] Linking target lib/librte_mbuf.so.24.1 00:02:21.123 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:21.123 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:21.123 [251/268] Linking target lib/librte_net.so.24.1 00:02:21.123 [252/268] Linking target lib/librte_reorder.so.24.1 00:02:21.123 [253/268] Linking target lib/librte_compressdev.so.24.1 00:02:21.123 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:02:21.381 [255/268] Generating symbol file 
lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:21.381 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:21.381 [257/268] Linking target lib/librte_cmdline.so.24.1 00:02:21.381 [258/268] Linking target lib/librte_hash.so.24.1 00:02:21.381 [259/268] Linking target lib/librte_security.so.24.1 00:02:21.638 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:23.539 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.539 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:23.539 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:23.539 [264/268] Linking target lib/librte_power.so.24.1 00:02:23.796 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:23.796 [266/268] Linking static target lib/librte_vhost.a 00:02:25.696 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.696 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:25.696 INFO: autodetecting backend as ninja 00:02:25.696 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:27.067 CC lib/log/log.o 00:02:27.067 CC lib/log/log_flags.o 00:02:27.067 CC lib/log/log_deprecated.o 00:02:27.067 CC lib/ut/ut.o 00:02:27.067 CC lib/ut_mock/mock.o 00:02:27.067 LIB libspdk_log.a 00:02:27.324 LIB libspdk_ut.a 00:02:27.324 SO libspdk_log.so.7.0 00:02:27.324 LIB libspdk_ut_mock.a 00:02:27.324 SO libspdk_ut.so.2.0 00:02:27.324 SO libspdk_ut_mock.so.6.0 00:02:27.324 SYMLINK libspdk_log.so 00:02:27.324 SYMLINK libspdk_ut.so 00:02:27.324 SYMLINK libspdk_ut_mock.so 00:02:27.581 CC lib/util/base64.o 00:02:27.581 CC lib/util/bit_array.o 00:02:27.581 CC lib/util/cpuset.o 00:02:27.581 CC lib/util/crc32.o 00:02:27.581 CC lib/util/crc16.o 00:02:27.581 CC lib/util/crc32c.o 00:02:27.581 CC lib/ioat/ioat.o 00:02:27.581 CC lib/dma/dma.o 00:02:27.581 CXX lib/trace_parser/trace.o 00:02:27.581 CC lib/vfio_user/host/vfio_user_pci.o 00:02:27.581 CC lib/vfio_user/host/vfio_user.o 00:02:27.581 CC lib/util/crc32_ieee.o 00:02:27.581 CC lib/util/crc64.o 00:02:27.581 CC lib/util/dif.o 00:02:27.838 CC lib/util/fd.o 00:02:27.838 LIB libspdk_dma.a 00:02:27.838 CC lib/util/file.o 00:02:27.838 SO libspdk_dma.so.4.0 00:02:27.838 CC lib/util/hexlify.o 00:02:27.838 CC lib/util/iov.o 00:02:27.838 SYMLINK libspdk_dma.so 00:02:27.838 CC lib/util/math.o 00:02:27.838 CC lib/util/pipe.o 00:02:27.838 CC lib/util/strerror_tls.o 00:02:27.838 LIB libspdk_vfio_user.a 00:02:27.838 CC lib/util/string.o 00:02:27.838 CC lib/util/uuid.o 00:02:27.838 SO libspdk_vfio_user.so.5.0 00:02:28.096 CC lib/util/fd_group.o 00:02:28.096 LIB libspdk_ioat.a 00:02:28.096 SYMLINK libspdk_vfio_user.so 00:02:28.096 CC lib/util/xor.o 00:02:28.096 CC lib/util/zipf.o 00:02:28.096 SO libspdk_ioat.so.7.0 00:02:28.096 SYMLINK libspdk_ioat.so 00:02:28.353 LIB libspdk_util.a 00:02:28.666 SO libspdk_util.so.9.1 00:02:28.666 LIB libspdk_trace_parser.a 00:02:28.666 SO libspdk_trace_parser.so.5.0 00:02:28.666 SYMLINK libspdk_util.so 00:02:28.923 SYMLINK libspdk_trace_parser.so 00:02:28.923 CC lib/env_dpdk/env.o 00:02:28.923 CC lib/conf/conf.o 00:02:28.923 CC lib/env_dpdk/memory.o 00:02:28.923 CC lib/env_dpdk/pci.o 00:02:28.923 CC lib/env_dpdk/init.o 00:02:28.923 CC lib/rdma_utils/rdma_utils.o 00:02:28.923 CC lib/idxd/idxd.o 00:02:28.923 CC lib/json/json_parse.o 00:02:28.923 
CC lib/vmd/vmd.o 00:02:28.923 CC lib/rdma_provider/common.o 00:02:29.182 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:29.182 CC lib/json/json_util.o 00:02:29.182 LIB libspdk_rdma_utils.a 00:02:29.182 LIB libspdk_conf.a 00:02:29.182 SO libspdk_rdma_utils.so.1.0 00:02:29.182 SO libspdk_conf.so.6.0 00:02:29.182 SYMLINK libspdk_conf.so 00:02:29.182 SYMLINK libspdk_rdma_utils.so 00:02:29.182 CC lib/idxd/idxd_user.o 00:02:29.182 CC lib/idxd/idxd_kernel.o 00:02:29.182 CC lib/env_dpdk/threads.o 00:02:29.182 LIB libspdk_rdma_provider.a 00:02:29.182 CC lib/env_dpdk/pci_ioat.o 00:02:29.182 SO libspdk_rdma_provider.so.6.0 00:02:29.439 SYMLINK libspdk_rdma_provider.so 00:02:29.439 CC lib/json/json_write.o 00:02:29.439 CC lib/env_dpdk/pci_virtio.o 00:02:29.439 CC lib/vmd/led.o 00:02:29.439 CC lib/env_dpdk/pci_vmd.o 00:02:29.439 CC lib/env_dpdk/pci_idxd.o 00:02:29.439 CC lib/env_dpdk/pci_event.o 00:02:29.439 CC lib/env_dpdk/sigbus_handler.o 00:02:29.697 LIB libspdk_idxd.a 00:02:29.697 CC lib/env_dpdk/pci_dpdk.o 00:02:29.697 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:29.697 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:29.697 SO libspdk_idxd.so.12.0 00:02:29.697 LIB libspdk_vmd.a 00:02:29.697 SYMLINK libspdk_idxd.so 00:02:29.697 LIB libspdk_json.a 00:02:29.697 SO libspdk_vmd.so.6.0 00:02:29.697 SO libspdk_json.so.6.0 00:02:29.697 SYMLINK libspdk_vmd.so 00:02:29.955 SYMLINK libspdk_json.so 00:02:30.213 CC lib/jsonrpc/jsonrpc_server.o 00:02:30.213 CC lib/jsonrpc/jsonrpc_client.o 00:02:30.213 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:30.213 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:30.472 LIB libspdk_jsonrpc.a 00:02:30.472 SO libspdk_jsonrpc.so.6.0 00:02:30.730 SYMLINK libspdk_jsonrpc.so 00:02:30.988 CC lib/rpc/rpc.o 00:02:30.988 LIB libspdk_env_dpdk.a 00:02:30.988 SO libspdk_env_dpdk.so.14.1 00:02:31.247 LIB libspdk_rpc.a 00:02:31.247 SYMLINK libspdk_env_dpdk.so 00:02:31.247 SO libspdk_rpc.so.6.0 00:02:31.247 SYMLINK libspdk_rpc.so 00:02:31.543 CC lib/notify/notify_rpc.o 00:02:31.543 CC lib/notify/notify.o 00:02:31.543 CC lib/trace/trace_flags.o 00:02:31.543 CC lib/trace/trace.o 00:02:31.543 CC lib/trace/trace_rpc.o 00:02:31.801 CC lib/keyring/keyring_rpc.o 00:02:31.801 CC lib/keyring/keyring.o 00:02:31.801 LIB libspdk_notify.a 00:02:31.801 SO libspdk_notify.so.6.0 00:02:32.059 LIB libspdk_trace.a 00:02:32.059 LIB libspdk_keyring.a 00:02:32.059 SYMLINK libspdk_notify.so 00:02:32.059 SO libspdk_trace.so.10.0 00:02:32.059 SO libspdk_keyring.so.1.0 00:02:32.059 SYMLINK libspdk_trace.so 00:02:32.059 SYMLINK libspdk_keyring.so 00:02:32.318 CC lib/sock/sock.o 00:02:32.318 CC lib/sock/sock_rpc.o 00:02:32.318 CC lib/thread/iobuf.o 00:02:32.318 CC lib/thread/thread.o 00:02:32.884 LIB libspdk_sock.a 00:02:32.884 SO libspdk_sock.so.10.0 00:02:33.142 SYMLINK libspdk_sock.so 00:02:33.399 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:33.399 CC lib/nvme/nvme_ctrlr.o 00:02:33.399 CC lib/nvme/nvme_ns.o 00:02:33.399 CC lib/nvme/nvme_fabric.o 00:02:33.399 CC lib/nvme/nvme_ns_cmd.o 00:02:33.399 CC lib/nvme/nvme_pcie_common.o 00:02:33.399 CC lib/nvme/nvme.o 00:02:33.399 CC lib/nvme/nvme_qpair.o 00:02:33.399 CC lib/nvme/nvme_pcie.o 00:02:34.332 CC lib/nvme/nvme_quirks.o 00:02:34.332 CC lib/nvme/nvme_transport.o 00:02:34.332 CC lib/nvme/nvme_discovery.o 00:02:34.332 LIB libspdk_thread.a 00:02:34.332 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:34.332 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:34.332 SO libspdk_thread.so.10.1 00:02:34.592 CC lib/nvme/nvme_tcp.o 00:02:34.592 SYMLINK libspdk_thread.so 00:02:34.592 CC lib/nvme/nvme_opal.o 00:02:34.592 CC 
lib/nvme/nvme_io_msg.o 00:02:34.592 CC lib/nvme/nvme_poll_group.o 00:02:34.592 CC lib/nvme/nvme_zns.o 00:02:34.850 CC lib/nvme/nvme_stubs.o 00:02:34.850 CC lib/nvme/nvme_auth.o 00:02:34.850 CC lib/nvme/nvme_cuse.o 00:02:35.107 CC lib/nvme/nvme_rdma.o 00:02:35.364 CC lib/accel/accel.o 00:02:35.364 CC lib/accel/accel_rpc.o 00:02:35.364 CC lib/blob/blobstore.o 00:02:35.364 CC lib/init/json_config.o 00:02:35.364 CC lib/accel/accel_sw.o 00:02:35.621 CC lib/blob/request.o 00:02:35.621 CC lib/init/subsystem.o 00:02:35.879 CC lib/init/subsystem_rpc.o 00:02:35.879 CC lib/blob/zeroes.o 00:02:35.879 CC lib/init/rpc.o 00:02:35.879 CC lib/virtio/virtio.o 00:02:35.879 CC lib/virtio/virtio_vhost_user.o 00:02:36.137 CC lib/virtio/virtio_vfio_user.o 00:02:36.137 CC lib/blob/blob_bs_dev.o 00:02:36.137 CC lib/virtio/virtio_pci.o 00:02:36.137 LIB libspdk_init.a 00:02:36.137 SO libspdk_init.so.5.0 00:02:36.137 SYMLINK libspdk_init.so 00:02:36.395 LIB libspdk_virtio.a 00:02:36.395 SO libspdk_virtio.so.7.0 00:02:36.395 CC lib/event/app.o 00:02:36.395 CC lib/event/log_rpc.o 00:02:36.395 CC lib/event/reactor.o 00:02:36.395 CC lib/event/app_rpc.o 00:02:36.395 CC lib/event/scheduler_static.o 00:02:36.672 SYMLINK libspdk_virtio.so 00:02:36.672 LIB libspdk_nvme.a 00:02:36.672 LIB libspdk_accel.a 00:02:36.672 SO libspdk_accel.so.15.1 00:02:36.930 SYMLINK libspdk_accel.so 00:02:36.930 SO libspdk_nvme.so.13.1 00:02:37.219 LIB libspdk_event.a 00:02:37.219 CC lib/bdev/bdev.o 00:02:37.219 CC lib/bdev/bdev_rpc.o 00:02:37.219 CC lib/bdev/bdev_zone.o 00:02:37.219 CC lib/bdev/part.o 00:02:37.219 CC lib/bdev/scsi_nvme.o 00:02:37.219 SYMLINK libspdk_nvme.so 00:02:37.219 SO libspdk_event.so.14.0 00:02:37.219 SYMLINK libspdk_event.so 00:02:39.748 LIB libspdk_blob.a 00:02:39.748 SO libspdk_blob.so.11.0 00:02:39.748 SYMLINK libspdk_blob.so 00:02:40.007 CC lib/lvol/lvol.o 00:02:40.007 CC lib/blobfs/blobfs.o 00:02:40.007 CC lib/blobfs/tree.o 00:02:40.266 LIB libspdk_bdev.a 00:02:40.525 SO libspdk_bdev.so.15.1 00:02:40.525 SYMLINK libspdk_bdev.so 00:02:40.783 CC lib/nvmf/ctrlr_discovery.o 00:02:40.783 CC lib/nbd/nbd.o 00:02:40.783 CC lib/nvmf/subsystem.o 00:02:40.783 CC lib/nvmf/ctrlr.o 00:02:40.783 CC lib/nvmf/ctrlr_bdev.o 00:02:40.783 CC lib/scsi/dev.o 00:02:40.783 CC lib/ftl/ftl_core.o 00:02:40.783 CC lib/ublk/ublk.o 00:02:41.041 LIB libspdk_blobfs.a 00:02:41.041 SO libspdk_blobfs.so.10.0 00:02:41.041 SYMLINK libspdk_blobfs.so 00:02:41.041 CC lib/scsi/lun.o 00:02:41.041 CC lib/scsi/port.o 00:02:41.041 LIB libspdk_lvol.a 00:02:41.041 SO libspdk_lvol.so.10.0 00:02:41.299 SYMLINK libspdk_lvol.so 00:02:41.299 CC lib/scsi/scsi.o 00:02:41.299 CC lib/nbd/nbd_rpc.o 00:02:41.299 CC lib/ftl/ftl_init.o 00:02:41.299 CC lib/ftl/ftl_layout.o 00:02:41.299 CC lib/scsi/scsi_bdev.o 00:02:41.299 CC lib/nvmf/nvmf.o 00:02:41.299 CC lib/nvmf/nvmf_rpc.o 00:02:41.299 LIB libspdk_nbd.a 00:02:41.557 SO libspdk_nbd.so.7.0 00:02:41.557 SYMLINK libspdk_nbd.so 00:02:41.557 CC lib/nvmf/transport.o 00:02:41.557 CC lib/scsi/scsi_pr.o 00:02:41.557 CC lib/ublk/ublk_rpc.o 00:02:41.557 CC lib/nvmf/tcp.o 00:02:41.815 CC lib/ftl/ftl_debug.o 00:02:41.815 LIB libspdk_ublk.a 00:02:41.815 SO libspdk_ublk.so.3.0 00:02:41.815 SYMLINK libspdk_ublk.so 00:02:41.815 CC lib/ftl/ftl_io.o 00:02:41.815 CC lib/scsi/scsi_rpc.o 00:02:42.072 CC lib/nvmf/stubs.o 00:02:42.072 CC lib/nvmf/mdns_server.o 00:02:42.072 CC lib/scsi/task.o 00:02:42.072 CC lib/ftl/ftl_sb.o 00:02:42.328 CC lib/nvmf/rdma.o 00:02:42.328 LIB libspdk_scsi.a 00:02:42.328 SO libspdk_scsi.so.9.0 00:02:42.328 CC 
lib/nvmf/auth.o 00:02:42.328 CC lib/ftl/ftl_l2p.o 00:02:42.328 CC lib/ftl/ftl_l2p_flat.o 00:02:42.328 CC lib/ftl/ftl_nv_cache.o 00:02:42.328 SYMLINK libspdk_scsi.so 00:02:42.586 CC lib/ftl/ftl_band.o 00:02:42.586 CC lib/ftl/ftl_band_ops.o 00:02:42.586 CC lib/ftl/ftl_writer.o 00:02:42.586 CC lib/iscsi/conn.o 00:02:42.586 CC lib/ftl/ftl_rq.o 00:02:42.846 CC lib/iscsi/init_grp.o 00:02:42.846 CC lib/iscsi/iscsi.o 00:02:42.846 CC lib/ftl/ftl_reloc.o 00:02:42.846 CC lib/ftl/ftl_l2p_cache.o 00:02:43.103 CC lib/ftl/ftl_p2l.o 00:02:43.360 CC lib/iscsi/md5.o 00:02:43.360 CC lib/iscsi/param.o 00:02:43.360 CC lib/iscsi/portal_grp.o 00:02:43.360 CC lib/iscsi/tgt_node.o 00:02:43.617 CC lib/iscsi/iscsi_subsystem.o 00:02:43.617 CC lib/iscsi/iscsi_rpc.o 00:02:43.617 CC lib/iscsi/task.o 00:02:43.617 CC lib/ftl/mngt/ftl_mngt.o 00:02:43.617 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:43.617 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:43.617 CC lib/vhost/vhost.o 00:02:43.875 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:43.875 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:43.875 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:43.875 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:43.875 CC lib/vhost/vhost_rpc.o 00:02:44.134 CC lib/vhost/vhost_scsi.o 00:02:44.134 CC lib/vhost/vhost_blk.o 00:02:44.134 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:44.134 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:44.134 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:44.134 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:44.134 CC lib/vhost/rte_vhost_user.o 00:02:44.391 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:44.391 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:44.391 CC lib/ftl/utils/ftl_conf.o 00:02:44.649 CC lib/ftl/utils/ftl_md.o 00:02:44.649 LIB libspdk_iscsi.a 00:02:44.649 CC lib/ftl/utils/ftl_mempool.o 00:02:44.649 CC lib/ftl/utils/ftl_bitmap.o 00:02:44.649 CC lib/ftl/utils/ftl_property.o 00:02:44.906 SO libspdk_iscsi.so.8.0 00:02:44.906 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:44.906 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:44.906 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:44.906 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:44.906 LIB libspdk_nvmf.a 00:02:45.169 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:45.169 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:45.169 SYMLINK libspdk_iscsi.so 00:02:45.169 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:45.169 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:45.169 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:45.169 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:45.169 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:45.169 SO libspdk_nvmf.so.18.1 00:02:45.169 CC lib/ftl/base/ftl_base_dev.o 00:02:45.169 CC lib/ftl/base/ftl_base_bdev.o 00:02:45.169 CC lib/ftl/ftl_trace.o 00:02:45.433 SYMLINK libspdk_nvmf.so 00:02:45.433 LIB libspdk_vhost.a 00:02:45.691 SO libspdk_vhost.so.8.0 00:02:45.691 LIB libspdk_ftl.a 00:02:45.691 SYMLINK libspdk_vhost.so 00:02:45.691 SO libspdk_ftl.so.9.0 00:02:46.259 SYMLINK libspdk_ftl.so 00:02:46.516 CC module/env_dpdk/env_dpdk_rpc.o 00:02:46.516 CC module/sock/posix/posix.o 00:02:46.516 CC module/accel/ioat/accel_ioat.o 00:02:46.774 CC module/keyring/file/keyring.o 00:02:46.774 CC module/accel/iaa/accel_iaa.o 00:02:46.774 CC module/keyring/linux/keyring.o 00:02:46.774 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:46.774 CC module/accel/dsa/accel_dsa.o 00:02:46.774 CC module/accel/error/accel_error.o 00:02:46.774 CC module/blob/bdev/blob_bdev.o 00:02:46.774 LIB libspdk_env_dpdk_rpc.a 00:02:46.774 SO libspdk_env_dpdk_rpc.so.6.0 00:02:46.774 CC module/keyring/file/keyring_rpc.o 00:02:46.774 SYMLINK libspdk_env_dpdk_rpc.so 00:02:46.774 CC 
module/keyring/linux/keyring_rpc.o 00:02:46.774 CC module/accel/error/accel_error_rpc.o 00:02:46.774 CC module/accel/ioat/accel_ioat_rpc.o 00:02:46.774 LIB libspdk_scheduler_dynamic.a 00:02:46.774 CC module/accel/iaa/accel_iaa_rpc.o 00:02:47.032 SO libspdk_scheduler_dynamic.so.4.0 00:02:47.032 LIB libspdk_keyring_file.a 00:02:47.032 CC module/accel/dsa/accel_dsa_rpc.o 00:02:47.032 LIB libspdk_accel_error.a 00:02:47.032 LIB libspdk_keyring_linux.a 00:02:47.032 LIB libspdk_accel_ioat.a 00:02:47.032 LIB libspdk_blob_bdev.a 00:02:47.032 SO libspdk_keyring_file.so.1.0 00:02:47.032 SYMLINK libspdk_scheduler_dynamic.so 00:02:47.032 SO libspdk_keyring_linux.so.1.0 00:02:47.032 SO libspdk_blob_bdev.so.11.0 00:02:47.032 SO libspdk_accel_ioat.so.6.0 00:02:47.032 SO libspdk_accel_error.so.2.0 00:02:47.032 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:47.032 LIB libspdk_accel_iaa.a 00:02:47.032 SYMLINK libspdk_keyring_file.so 00:02:47.032 SO libspdk_accel_iaa.so.3.0 00:02:47.032 SYMLINK libspdk_accel_error.so 00:02:47.032 SYMLINK libspdk_accel_ioat.so 00:02:47.032 SYMLINK libspdk_blob_bdev.so 00:02:47.032 LIB libspdk_accel_dsa.a 00:02:47.032 SYMLINK libspdk_keyring_linux.so 00:02:47.032 SYMLINK libspdk_accel_iaa.so 00:02:47.032 SO libspdk_accel_dsa.so.5.0 00:02:47.291 LIB libspdk_scheduler_dpdk_governor.a 00:02:47.291 SYMLINK libspdk_accel_dsa.so 00:02:47.291 CC module/scheduler/gscheduler/gscheduler.o 00:02:47.291 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:47.291 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:47.291 CC module/bdev/gpt/gpt.o 00:02:47.291 CC module/bdev/lvol/vbdev_lvol.o 00:02:47.291 CC module/bdev/null/bdev_null.o 00:02:47.291 CC module/bdev/error/vbdev_error.o 00:02:47.291 LIB libspdk_scheduler_gscheduler.a 00:02:47.291 CC module/bdev/delay/vbdev_delay.o 00:02:47.291 CC module/bdev/malloc/bdev_malloc.o 00:02:47.291 CC module/blobfs/bdev/blobfs_bdev.o 00:02:47.549 SO libspdk_scheduler_gscheduler.so.4.0 00:02:47.549 SYMLINK libspdk_scheduler_gscheduler.so 00:02:47.549 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:47.549 CC module/bdev/nvme/bdev_nvme.o 00:02:47.549 LIB libspdk_sock_posix.a 00:02:47.549 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:47.549 CC module/bdev/gpt/vbdev_gpt.o 00:02:47.549 SO libspdk_sock_posix.so.6.0 00:02:47.549 LIB libspdk_blobfs_bdev.a 00:02:47.807 CC module/bdev/null/bdev_null_rpc.o 00:02:47.807 CC module/bdev/error/vbdev_error_rpc.o 00:02:47.807 SO libspdk_blobfs_bdev.so.6.0 00:02:47.807 SYMLINK libspdk_sock_posix.so 00:02:47.807 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:47.807 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:47.807 SYMLINK libspdk_blobfs_bdev.so 00:02:47.807 CC module/bdev/nvme/nvme_rpc.o 00:02:47.807 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:47.807 LIB libspdk_bdev_error.a 00:02:47.807 LIB libspdk_bdev_null.a 00:02:47.807 LIB libspdk_bdev_gpt.a 00:02:47.807 SO libspdk_bdev_error.so.6.0 00:02:47.807 SO libspdk_bdev_null.so.6.0 00:02:48.065 SO libspdk_bdev_gpt.so.6.0 00:02:48.065 SYMLINK libspdk_bdev_error.so 00:02:48.065 SYMLINK libspdk_bdev_null.so 00:02:48.065 CC module/bdev/nvme/bdev_mdns_client.o 00:02:48.065 CC module/bdev/nvme/vbdev_opal.o 00:02:48.065 LIB libspdk_bdev_malloc.a 00:02:48.065 SYMLINK libspdk_bdev_gpt.so 00:02:48.065 LIB libspdk_bdev_delay.a 00:02:48.065 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:48.065 SO libspdk_bdev_malloc.so.6.0 00:02:48.065 SO libspdk_bdev_delay.so.6.0 00:02:48.065 LIB libspdk_bdev_lvol.a 00:02:48.065 SYMLINK libspdk_bdev_malloc.so 00:02:48.065 SYMLINK libspdk_bdev_delay.so 
00:02:48.065 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:48.322 CC module/bdev/passthru/vbdev_passthru.o 00:02:48.322 SO libspdk_bdev_lvol.so.6.0 00:02:48.322 CC module/bdev/raid/bdev_raid.o 00:02:48.322 CC module/bdev/raid/bdev_raid_rpc.o 00:02:48.322 SYMLINK libspdk_bdev_lvol.so 00:02:48.322 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:48.322 CC module/bdev/split/vbdev_split.o 00:02:48.322 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:48.579 CC module/bdev/xnvme/bdev_xnvme.o 00:02:48.579 CC module/bdev/aio/bdev_aio.o 00:02:48.579 CC module/bdev/raid/bdev_raid_sb.o 00:02:48.579 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:48.579 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:02:48.579 CC module/bdev/split/vbdev_split_rpc.o 00:02:48.579 LIB libspdk_bdev_passthru.a 00:02:48.579 CC module/bdev/ftl/bdev_ftl.o 00:02:48.839 SO libspdk_bdev_passthru.so.6.0 00:02:48.839 LIB libspdk_bdev_xnvme.a 00:02:48.839 CC module/bdev/raid/raid0.o 00:02:48.839 LIB libspdk_bdev_zone_block.a 00:02:48.839 SO libspdk_bdev_xnvme.so.3.0 00:02:48.839 SO libspdk_bdev_zone_block.so.6.0 00:02:48.839 SYMLINK libspdk_bdev_passthru.so 00:02:48.839 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:48.839 CC module/bdev/aio/bdev_aio_rpc.o 00:02:48.839 LIB libspdk_bdev_split.a 00:02:48.839 SYMLINK libspdk_bdev_xnvme.so 00:02:48.839 SO libspdk_bdev_split.so.6.0 00:02:48.839 SYMLINK libspdk_bdev_zone_block.so 00:02:48.839 CC module/bdev/raid/raid1.o 00:02:48.839 CC module/bdev/raid/concat.o 00:02:48.839 SYMLINK libspdk_bdev_split.so 00:02:49.104 LIB libspdk_bdev_aio.a 00:02:49.104 CC module/bdev/iscsi/bdev_iscsi.o 00:02:49.104 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:49.104 SO libspdk_bdev_aio.so.6.0 00:02:49.104 LIB libspdk_bdev_ftl.a 00:02:49.104 SO libspdk_bdev_ftl.so.6.0 00:02:49.104 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:49.104 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:49.104 SYMLINK libspdk_bdev_aio.so 00:02:49.104 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:49.104 SYMLINK libspdk_bdev_ftl.so 00:02:49.373 LIB libspdk_bdev_iscsi.a 00:02:49.373 LIB libspdk_bdev_raid.a 00:02:49.633 SO libspdk_bdev_iscsi.so.6.0 00:02:49.633 SO libspdk_bdev_raid.so.6.0 00:02:49.633 SYMLINK libspdk_bdev_iscsi.so 00:02:49.633 SYMLINK libspdk_bdev_raid.so 00:02:49.633 LIB libspdk_bdev_virtio.a 00:02:49.892 SO libspdk_bdev_virtio.so.6.0 00:02:49.892 SYMLINK libspdk_bdev_virtio.so 00:02:50.458 LIB libspdk_bdev_nvme.a 00:02:50.458 SO libspdk_bdev_nvme.so.7.0 00:02:50.717 SYMLINK libspdk_bdev_nvme.so 00:02:51.285 CC module/event/subsystems/iobuf/iobuf.o 00:02:51.286 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:51.286 CC module/event/subsystems/vmd/vmd.o 00:02:51.286 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:51.286 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:51.286 CC module/event/subsystems/sock/sock.o 00:02:51.286 CC module/event/subsystems/scheduler/scheduler.o 00:02:51.286 CC module/event/subsystems/keyring/keyring.o 00:02:51.544 LIB libspdk_event_vhost_blk.a 00:02:51.544 LIB libspdk_event_sock.a 00:02:51.544 LIB libspdk_event_vmd.a 00:02:51.544 LIB libspdk_event_iobuf.a 00:02:51.544 LIB libspdk_event_scheduler.a 00:02:51.544 SO libspdk_event_vhost_blk.so.3.0 00:02:51.544 LIB libspdk_event_keyring.a 00:02:51.544 SO libspdk_event_sock.so.5.0 00:02:51.544 SO libspdk_event_vmd.so.6.0 00:02:51.544 SO libspdk_event_scheduler.so.4.0 00:02:51.544 SO libspdk_event_iobuf.so.3.0 00:02:51.544 SO libspdk_event_keyring.so.1.0 00:02:51.544 SYMLINK libspdk_event_vhost_blk.so 00:02:51.544 SYMLINK 
libspdk_event_vmd.so 00:02:51.544 SYMLINK libspdk_event_scheduler.so 00:02:51.544 SYMLINK libspdk_event_keyring.so 00:02:51.544 SYMLINK libspdk_event_sock.so 00:02:51.544 SYMLINK libspdk_event_iobuf.so 00:02:52.132 CC module/event/subsystems/accel/accel.o 00:02:52.132 LIB libspdk_event_accel.a 00:02:52.132 SO libspdk_event_accel.so.6.0 00:02:52.390 SYMLINK libspdk_event_accel.so 00:02:52.648 CC module/event/subsystems/bdev/bdev.o 00:02:52.906 LIB libspdk_event_bdev.a 00:02:52.906 SO libspdk_event_bdev.so.6.0 00:02:52.906 SYMLINK libspdk_event_bdev.so 00:02:53.472 CC module/event/subsystems/scsi/scsi.o 00:02:53.472 CC module/event/subsystems/nbd/nbd.o 00:02:53.472 CC module/event/subsystems/ublk/ublk.o 00:02:53.472 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:53.472 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:53.472 LIB libspdk_event_scsi.a 00:02:53.472 LIB libspdk_event_nbd.a 00:02:53.472 SO libspdk_event_scsi.so.6.0 00:02:53.472 LIB libspdk_event_ublk.a 00:02:53.472 SO libspdk_event_nbd.so.6.0 00:02:53.472 SO libspdk_event_ublk.so.3.0 00:02:53.472 SYMLINK libspdk_event_scsi.so 00:02:53.729 SYMLINK libspdk_event_nbd.so 00:02:53.729 SYMLINK libspdk_event_ublk.so 00:02:53.729 LIB libspdk_event_nvmf.a 00:02:53.729 SO libspdk_event_nvmf.so.6.0 00:02:53.729 SYMLINK libspdk_event_nvmf.so 00:02:53.988 CC module/event/subsystems/iscsi/iscsi.o 00:02:53.988 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:53.988 LIB libspdk_event_vhost_scsi.a 00:02:53.988 LIB libspdk_event_iscsi.a 00:02:53.988 SO libspdk_event_iscsi.so.6.0 00:02:53.988 SO libspdk_event_vhost_scsi.so.3.0 00:02:54.247 SYMLINK libspdk_event_vhost_scsi.so 00:02:54.247 SYMLINK libspdk_event_iscsi.so 00:02:54.505 SO libspdk.so.6.0 00:02:54.505 SYMLINK libspdk.so 00:02:54.763 CC app/trace_record/trace_record.o 00:02:54.763 TEST_HEADER include/spdk/accel.h 00:02:54.763 TEST_HEADER include/spdk/accel_module.h 00:02:54.763 CXX app/trace/trace.o 00:02:54.763 TEST_HEADER include/spdk/assert.h 00:02:54.763 CC test/rpc_client/rpc_client_test.o 00:02:54.763 TEST_HEADER include/spdk/barrier.h 00:02:54.763 TEST_HEADER include/spdk/base64.h 00:02:54.763 TEST_HEADER include/spdk/bdev.h 00:02:54.763 TEST_HEADER include/spdk/bdev_module.h 00:02:54.763 TEST_HEADER include/spdk/bdev_zone.h 00:02:54.763 TEST_HEADER include/spdk/bit_array.h 00:02:54.763 TEST_HEADER include/spdk/bit_pool.h 00:02:54.763 TEST_HEADER include/spdk/blob_bdev.h 00:02:54.763 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:54.763 TEST_HEADER include/spdk/blobfs.h 00:02:54.763 TEST_HEADER include/spdk/blob.h 00:02:54.763 TEST_HEADER include/spdk/conf.h 00:02:54.763 TEST_HEADER include/spdk/config.h 00:02:54.763 TEST_HEADER include/spdk/cpuset.h 00:02:54.763 TEST_HEADER include/spdk/crc16.h 00:02:54.763 TEST_HEADER include/spdk/crc32.h 00:02:54.763 TEST_HEADER include/spdk/crc64.h 00:02:54.763 TEST_HEADER include/spdk/dif.h 00:02:54.763 CC app/nvmf_tgt/nvmf_main.o 00:02:54.763 TEST_HEADER include/spdk/dma.h 00:02:54.763 TEST_HEADER include/spdk/endian.h 00:02:54.763 TEST_HEADER include/spdk/env_dpdk.h 00:02:54.763 TEST_HEADER include/spdk/env.h 00:02:54.763 TEST_HEADER include/spdk/event.h 00:02:54.763 TEST_HEADER include/spdk/fd_group.h 00:02:54.763 TEST_HEADER include/spdk/fd.h 00:02:54.763 TEST_HEADER include/spdk/file.h 00:02:54.763 TEST_HEADER include/spdk/ftl.h 00:02:54.763 TEST_HEADER include/spdk/gpt_spec.h 00:02:54.763 TEST_HEADER include/spdk/hexlify.h 00:02:54.763 TEST_HEADER include/spdk/histogram_data.h 00:02:54.763 TEST_HEADER include/spdk/idxd.h 
00:02:54.763 TEST_HEADER include/spdk/idxd_spec.h 00:02:54.763 TEST_HEADER include/spdk/init.h 00:02:54.763 CC test/thread/poller_perf/poller_perf.o 00:02:54.763 TEST_HEADER include/spdk/ioat.h 00:02:54.763 TEST_HEADER include/spdk/ioat_spec.h 00:02:54.763 TEST_HEADER include/spdk/iscsi_spec.h 00:02:54.763 TEST_HEADER include/spdk/json.h 00:02:54.763 TEST_HEADER include/spdk/jsonrpc.h 00:02:54.763 TEST_HEADER include/spdk/keyring.h 00:02:54.763 TEST_HEADER include/spdk/keyring_module.h 00:02:54.763 TEST_HEADER include/spdk/likely.h 00:02:54.763 TEST_HEADER include/spdk/log.h 00:02:54.763 TEST_HEADER include/spdk/lvol.h 00:02:54.763 TEST_HEADER include/spdk/memory.h 00:02:54.763 TEST_HEADER include/spdk/mmio.h 00:02:54.763 TEST_HEADER include/spdk/nbd.h 00:02:54.763 TEST_HEADER include/spdk/notify.h 00:02:54.763 TEST_HEADER include/spdk/nvme.h 00:02:54.763 CC examples/util/zipf/zipf.o 00:02:54.763 TEST_HEADER include/spdk/nvme_intel.h 00:02:54.763 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:54.763 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:54.763 TEST_HEADER include/spdk/nvme_spec.h 00:02:54.763 CC test/app/bdev_svc/bdev_svc.o 00:02:54.763 TEST_HEADER include/spdk/nvme_zns.h 00:02:54.763 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:54.763 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:54.763 TEST_HEADER include/spdk/nvmf.h 00:02:54.763 TEST_HEADER include/spdk/nvmf_spec.h 00:02:54.763 TEST_HEADER include/spdk/nvmf_transport.h 00:02:54.763 TEST_HEADER include/spdk/opal.h 00:02:54.763 CC test/dma/test_dma/test_dma.o 00:02:54.763 TEST_HEADER include/spdk/opal_spec.h 00:02:54.763 TEST_HEADER include/spdk/pci_ids.h 00:02:54.763 TEST_HEADER include/spdk/pipe.h 00:02:54.763 TEST_HEADER include/spdk/queue.h 00:02:54.763 TEST_HEADER include/spdk/reduce.h 00:02:54.763 TEST_HEADER include/spdk/rpc.h 00:02:54.763 TEST_HEADER include/spdk/scheduler.h 00:02:54.763 TEST_HEADER include/spdk/scsi.h 00:02:54.763 TEST_HEADER include/spdk/scsi_spec.h 00:02:54.763 TEST_HEADER include/spdk/sock.h 00:02:54.763 TEST_HEADER include/spdk/stdinc.h 00:02:54.763 TEST_HEADER include/spdk/string.h 00:02:54.763 TEST_HEADER include/spdk/thread.h 00:02:54.763 TEST_HEADER include/spdk/trace.h 00:02:54.763 TEST_HEADER include/spdk/trace_parser.h 00:02:54.763 TEST_HEADER include/spdk/tree.h 00:02:54.763 CC test/env/mem_callbacks/mem_callbacks.o 00:02:54.763 TEST_HEADER include/spdk/ublk.h 00:02:54.763 TEST_HEADER include/spdk/util.h 00:02:54.763 TEST_HEADER include/spdk/uuid.h 00:02:54.763 TEST_HEADER include/spdk/version.h 00:02:54.763 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:54.763 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:54.763 LINK rpc_client_test 00:02:54.763 TEST_HEADER include/spdk/vhost.h 00:02:54.763 TEST_HEADER include/spdk/vmd.h 00:02:55.021 TEST_HEADER include/spdk/xor.h 00:02:55.021 TEST_HEADER include/spdk/zipf.h 00:02:55.021 CXX test/cpp_headers/accel.o 00:02:55.021 LINK poller_perf 00:02:55.021 LINK spdk_trace_record 00:02:55.021 LINK nvmf_tgt 00:02:55.021 LINK zipf 00:02:55.021 LINK bdev_svc 00:02:55.021 CXX test/cpp_headers/accel_module.o 00:02:55.021 CXX test/cpp_headers/assert.o 00:02:55.021 CXX test/cpp_headers/barrier.o 00:02:55.305 LINK spdk_trace 00:02:55.305 CXX test/cpp_headers/base64.o 00:02:55.305 LINK test_dma 00:02:55.305 CC test/app/histogram_perf/histogram_perf.o 00:02:55.305 CXX test/cpp_headers/bdev.o 00:02:55.305 CC examples/vmd/lsvmd/lsvmd.o 00:02:55.305 CC examples/ioat/perf/perf.o 00:02:55.564 CC examples/idxd/perf/perf.o 00:02:55.564 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 
00:02:55.564 LINK mem_callbacks 00:02:55.564 CC examples/ioat/verify/verify.o 00:02:55.564 CC app/iscsi_tgt/iscsi_tgt.o 00:02:55.564 CXX test/cpp_headers/bdev_module.o 00:02:55.564 LINK histogram_perf 00:02:55.564 LINK lsvmd 00:02:55.564 CXX test/cpp_headers/bdev_zone.o 00:02:55.564 LINK ioat_perf 00:02:55.822 LINK verify 00:02:55.822 CXX test/cpp_headers/bit_array.o 00:02:55.822 CC test/env/vtophys/vtophys.o 00:02:55.822 LINK iscsi_tgt 00:02:55.822 LINK idxd_perf 00:02:55.822 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:55.822 CC examples/vmd/led/led.o 00:02:55.822 LINK vtophys 00:02:55.822 CXX test/cpp_headers/bit_pool.o 00:02:55.822 LINK nvme_fuzz 00:02:56.081 CC examples/thread/thread/thread_ex.o 00:02:56.081 CC examples/sock/hello_world/hello_sock.o 00:02:56.081 CC test/app/jsoncat/jsoncat.o 00:02:56.081 LINK led 00:02:56.081 LINK interrupt_tgt 00:02:56.081 CXX test/cpp_headers/blob_bdev.o 00:02:56.081 CC test/app/stub/stub.o 00:02:56.081 CC app/spdk_tgt/spdk_tgt.o 00:02:56.081 LINK jsoncat 00:02:56.081 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:56.338 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:56.338 LINK thread 00:02:56.338 LINK hello_sock 00:02:56.338 CXX test/cpp_headers/blobfs_bdev.o 00:02:56.338 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:56.338 LINK env_dpdk_post_init 00:02:56.338 LINK stub 00:02:56.338 CXX test/cpp_headers/blobfs.o 00:02:56.338 LINK spdk_tgt 00:02:56.595 CC test/event/event_perf/event_perf.o 00:02:56.595 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:56.595 CXX test/cpp_headers/blob.o 00:02:56.595 CXX test/cpp_headers/conf.o 00:02:56.595 CC test/event/reactor/reactor.o 00:02:56.595 LINK event_perf 00:02:56.595 CC test/env/memory/memory_ut.o 00:02:56.595 CC examples/nvme/hello_world/hello_world.o 00:02:56.595 CXX test/cpp_headers/config.o 00:02:56.852 CC examples/accel/perf/accel_perf.o 00:02:56.852 CC app/spdk_lspci/spdk_lspci.o 00:02:56.852 LINK reactor 00:02:56.852 CXX test/cpp_headers/cpuset.o 00:02:56.852 CC examples/blob/hello_world/hello_blob.o 00:02:56.852 LINK spdk_lspci 00:02:56.852 CC test/event/reactor_perf/reactor_perf.o 00:02:56.852 CXX test/cpp_headers/crc16.o 00:02:56.852 LINK vhost_fuzz 00:02:56.852 LINK hello_world 00:02:57.110 LINK reactor_perf 00:02:57.110 CXX test/cpp_headers/crc32.o 00:02:57.110 CC test/nvme/aer/aer.o 00:02:57.110 LINK hello_blob 00:02:57.110 CC app/spdk_nvme_perf/perf.o 00:02:57.110 CC app/spdk_nvme_identify/identify.o 00:02:57.110 CXX test/cpp_headers/crc64.o 00:02:57.367 CC examples/nvme/reconnect/reconnect.o 00:02:57.367 CC test/event/app_repeat/app_repeat.o 00:02:57.367 LINK accel_perf 00:02:57.367 CXX test/cpp_headers/dif.o 00:02:57.367 LINK aer 00:02:57.367 LINK app_repeat 00:02:57.625 CC examples/blob/cli/blobcli.o 00:02:57.625 CXX test/cpp_headers/dma.o 00:02:57.625 CC app/spdk_nvme_discover/discovery_aer.o 00:02:57.626 CC test/nvme/reset/reset.o 00:02:57.626 LINK reconnect 00:02:57.884 CXX test/cpp_headers/endian.o 00:02:57.884 CC test/event/scheduler/scheduler.o 00:02:57.884 LINK memory_ut 00:02:57.884 LINK spdk_nvme_discover 00:02:57.884 CXX test/cpp_headers/env_dpdk.o 00:02:57.884 LINK reset 00:02:58.141 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:58.141 LINK scheduler 00:02:58.141 LINK blobcli 00:02:58.141 CXX test/cpp_headers/env.o 00:02:58.141 LINK spdk_nvme_perf 00:02:58.141 CC app/spdk_top/spdk_top.o 00:02:58.141 CC test/env/pci/pci_ut.o 00:02:58.399 CC test/nvme/sgl/sgl.o 00:02:58.399 CXX test/cpp_headers/event.o 00:02:58.399 LINK iscsi_fuzz 00:02:58.399 LINK 
spdk_nvme_identify 00:02:58.399 CXX test/cpp_headers/fd_group.o 00:02:58.399 CC test/nvme/e2edp/nvme_dp.o 00:02:58.399 CC examples/bdev/hello_world/hello_bdev.o 00:02:58.399 CXX test/cpp_headers/fd.o 00:02:58.657 CC examples/nvme/arbitration/arbitration.o 00:02:58.657 CC examples/nvme/hotplug/hotplug.o 00:02:58.657 LINK sgl 00:02:58.657 LINK pci_ut 00:02:58.657 LINK nvme_dp 00:02:58.657 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:58.657 CXX test/cpp_headers/file.o 00:02:58.657 LINK nvme_manage 00:02:58.657 LINK hello_bdev 00:02:58.915 CXX test/cpp_headers/ftl.o 00:02:58.915 LINK hotplug 00:02:58.915 CC test/nvme/overhead/overhead.o 00:02:58.915 LINK cmb_copy 00:02:58.915 LINK arbitration 00:02:58.915 CC test/nvme/err_injection/err_injection.o 00:02:58.915 CC test/nvme/startup/startup.o 00:02:58.915 CXX test/cpp_headers/gpt_spec.o 00:02:58.915 CXX test/cpp_headers/hexlify.o 00:02:59.173 CC test/accel/dif/dif.o 00:02:59.173 CC examples/bdev/bdevperf/bdevperf.o 00:02:59.173 CC test/nvme/reserve/reserve.o 00:02:59.173 LINK err_injection 00:02:59.174 LINK overhead 00:02:59.174 CC examples/nvme/abort/abort.o 00:02:59.174 LINK startup 00:02:59.174 CXX test/cpp_headers/histogram_data.o 00:02:59.174 LINK spdk_top 00:02:59.174 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:59.431 CXX test/cpp_headers/idxd.o 00:02:59.431 LINK reserve 00:02:59.431 CC test/nvme/simple_copy/simple_copy.o 00:02:59.431 CC test/nvme/boot_partition/boot_partition.o 00:02:59.431 CC test/nvme/connect_stress/connect_stress.o 00:02:59.431 LINK pmr_persistence 00:02:59.431 CXX test/cpp_headers/idxd_spec.o 00:02:59.689 CC app/vhost/vhost.o 00:02:59.689 LINK dif 00:02:59.689 LINK abort 00:02:59.689 LINK boot_partition 00:02:59.689 CC test/nvme/compliance/nvme_compliance.o 00:02:59.689 CXX test/cpp_headers/init.o 00:02:59.689 LINK connect_stress 00:02:59.689 LINK simple_copy 00:02:59.689 CC app/spdk_dd/spdk_dd.o 00:02:59.689 LINK vhost 00:02:59.947 CXX test/cpp_headers/ioat.o 00:02:59.947 CC test/nvme/fused_ordering/fused_ordering.o 00:02:59.947 CC app/fio/nvme/fio_plugin.o 00:02:59.947 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:59.947 CC test/nvme/fdp/fdp.o 00:02:59.947 CXX test/cpp_headers/ioat_spec.o 00:02:59.947 LINK bdevperf 00:02:59.947 LINK nvme_compliance 00:03:00.205 CC test/blobfs/mkfs/mkfs.o 00:03:00.205 LINK fused_ordering 00:03:00.205 CXX test/cpp_headers/iscsi_spec.o 00:03:00.205 LINK spdk_dd 00:03:00.205 LINK doorbell_aers 00:03:00.205 CXX test/cpp_headers/json.o 00:03:00.205 CC test/lvol/esnap/esnap.o 00:03:00.205 LINK mkfs 00:03:00.462 CXX test/cpp_headers/jsonrpc.o 00:03:00.462 LINK fdp 00:03:00.462 CXX test/cpp_headers/keyring.o 00:03:00.462 CC test/nvme/cuse/cuse.o 00:03:00.462 CC examples/nvmf/nvmf/nvmf.o 00:03:00.462 CXX test/cpp_headers/keyring_module.o 00:03:00.462 CC app/fio/bdev/fio_plugin.o 00:03:00.462 CXX test/cpp_headers/likely.o 00:03:00.462 CXX test/cpp_headers/log.o 00:03:00.720 CXX test/cpp_headers/lvol.o 00:03:00.720 CXX test/cpp_headers/memory.o 00:03:00.720 CC test/bdev/bdevio/bdevio.o 00:03:00.720 LINK spdk_nvme 00:03:00.720 CXX test/cpp_headers/mmio.o 00:03:00.720 CXX test/cpp_headers/nbd.o 00:03:00.720 CXX test/cpp_headers/notify.o 00:03:00.720 CXX test/cpp_headers/nvme.o 00:03:00.720 CXX test/cpp_headers/nvme_intel.o 00:03:00.720 CXX test/cpp_headers/nvme_ocssd.o 00:03:00.977 LINK nvmf 00:03:00.977 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:00.977 CXX test/cpp_headers/nvme_spec.o 00:03:00.977 CXX test/cpp_headers/nvme_zns.o 00:03:00.977 CXX test/cpp_headers/nvmf_cmd.o 
00:03:00.977 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:00.977 CXX test/cpp_headers/nvmf.o 00:03:00.977 CXX test/cpp_headers/nvmf_spec.o 00:03:00.977 CXX test/cpp_headers/nvmf_transport.o 00:03:00.977 LINK bdevio 00:03:01.235 CXX test/cpp_headers/opal.o 00:03:01.235 LINK spdk_bdev 00:03:01.235 CXX test/cpp_headers/opal_spec.o 00:03:01.235 CXX test/cpp_headers/pci_ids.o 00:03:01.235 CXX test/cpp_headers/pipe.o 00:03:01.235 CXX test/cpp_headers/queue.o 00:03:01.235 CXX test/cpp_headers/reduce.o 00:03:01.235 CXX test/cpp_headers/rpc.o 00:03:01.235 CXX test/cpp_headers/scheduler.o 00:03:01.235 CXX test/cpp_headers/scsi.o 00:03:01.235 CXX test/cpp_headers/scsi_spec.o 00:03:01.235 CXX test/cpp_headers/sock.o 00:03:01.235 CXX test/cpp_headers/stdinc.o 00:03:01.492 CXX test/cpp_headers/string.o 00:03:01.492 CXX test/cpp_headers/thread.o 00:03:01.492 CXX test/cpp_headers/trace.o 00:03:01.492 CXX test/cpp_headers/trace_parser.o 00:03:01.492 CXX test/cpp_headers/tree.o 00:03:01.492 CXX test/cpp_headers/ublk.o 00:03:01.492 CXX test/cpp_headers/util.o 00:03:01.492 CXX test/cpp_headers/uuid.o 00:03:01.492 CXX test/cpp_headers/version.o 00:03:01.492 CXX test/cpp_headers/vfio_user_pci.o 00:03:01.492 CXX test/cpp_headers/vfio_user_spec.o 00:03:01.492 CXX test/cpp_headers/vhost.o 00:03:01.492 CXX test/cpp_headers/vmd.o 00:03:01.492 CXX test/cpp_headers/xor.o 00:03:01.750 CXX test/cpp_headers/zipf.o 00:03:01.750 LINK cuse 00:03:07.020 LINK esnap 00:03:07.020 00:03:07.020 real 1m14.569s 00:03:07.020 user 6m51.006s 00:03:07.020 sys 1m37.038s 00:03:07.020 14:58:44 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:07.020 14:58:44 make -- common/autotest_common.sh@10 -- $ set +x 00:03:07.020 ************************************ 00:03:07.020 END TEST make 00:03:07.020 ************************************ 00:03:07.020 14:58:44 -- common/autotest_common.sh@1142 -- $ return 0 00:03:07.020 14:58:44 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:07.020 14:58:44 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:07.020 14:58:44 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:07.020 14:58:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:07.020 14:58:44 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:07.020 14:58:44 -- pm/common@44 -- $ pid=5396 00:03:07.020 14:58:44 -- pm/common@50 -- $ kill -TERM 5396 00:03:07.020 14:58:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:07.020 14:58:44 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:07.020 14:58:44 -- pm/common@44 -- $ pid=5398 00:03:07.020 14:58:44 -- pm/common@50 -- $ kill -TERM 5398 00:03:07.020 14:58:45 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:07.020 14:58:45 -- nvmf/common.sh@7 -- # uname -s 00:03:07.020 14:58:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:07.020 14:58:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:07.020 14:58:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:07.020 14:58:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:07.020 14:58:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:07.020 14:58:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:07.020 14:58:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:07.020 14:58:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:07.020 14:58:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
00:03:07.020 14:58:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:07.020 14:58:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7fb24378-06dc-4546-ad9f-378969c62fd9 00:03:07.020 14:58:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=7fb24378-06dc-4546-ad9f-378969c62fd9 00:03:07.020 14:58:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:07.020 14:58:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:07.020 14:58:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:07.020 14:58:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:07.020 14:58:45 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:07.020 14:58:45 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:07.020 14:58:45 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:07.020 14:58:45 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:07.020 14:58:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:07.020 14:58:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:07.020 14:58:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:07.020 14:58:45 -- paths/export.sh@5 -- # export PATH 00:03:07.020 14:58:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:07.020 14:58:45 -- nvmf/common.sh@47 -- # : 0 00:03:07.020 14:58:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:07.020 14:58:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:07.020 14:58:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:07.020 14:58:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:07.020 14:58:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:07.020 14:58:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:07.020 14:58:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:07.020 14:58:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:07.020 14:58:45 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:07.020 14:58:45 -- spdk/autotest.sh@32 -- # uname -s 00:03:07.020 14:58:45 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:07.020 14:58:45 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:07.020 14:58:45 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:07.020 14:58:45 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:07.021 14:58:45 -- spdk/autotest.sh@40 -- # echo 
/home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:07.021 14:58:45 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:07.279 14:58:45 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:07.279 14:58:45 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:07.279 14:58:45 -- spdk/autotest.sh@48 -- # udevadm_pid=53947 00:03:07.279 14:58:45 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:07.279 14:58:45 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:07.279 14:58:45 -- pm/common@17 -- # local monitor 00:03:07.279 14:58:45 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:07.279 14:58:45 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:07.279 14:58:45 -- pm/common@25 -- # sleep 1 00:03:07.279 14:58:45 -- pm/common@21 -- # date +%s 00:03:07.279 14:58:45 -- pm/common@21 -- # date +%s 00:03:07.279 14:58:45 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721055525 00:03:07.279 14:58:45 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721055525 00:03:07.279 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721055525_collect-vmstat.pm.log 00:03:07.279 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721055525_collect-cpu-load.pm.log 00:03:08.215 14:58:46 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:08.215 14:58:46 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:08.215 14:58:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:08.215 14:58:46 -- common/autotest_common.sh@10 -- # set +x 00:03:08.215 14:58:46 -- spdk/autotest.sh@59 -- # create_test_list 00:03:08.215 14:58:46 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:08.215 14:58:46 -- common/autotest_common.sh@10 -- # set +x 00:03:08.215 14:58:46 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:08.215 14:58:46 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:08.215 14:58:46 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:08.215 14:58:46 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:08.215 14:58:46 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:08.215 14:58:46 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:08.215 14:58:46 -- common/autotest_common.sh@1455 -- # uname 00:03:08.215 14:58:46 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:08.215 14:58:46 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:08.215 14:58:46 -- common/autotest_common.sh@1475 -- # uname 00:03:08.215 14:58:46 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:08.215 14:58:46 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:08.215 14:58:46 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:08.215 14:58:46 -- spdk/autotest.sh@72 -- # hash lcov 00:03:08.215 14:58:46 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:08.215 14:58:46 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:08.215 --rc lcov_branch_coverage=1 00:03:08.215 --rc lcov_function_coverage=1 00:03:08.215 --rc genhtml_branch_coverage=1 00:03:08.215 --rc genhtml_function_coverage=1 00:03:08.215 --rc genhtml_legend=1 00:03:08.215 --rc geninfo_all_blocks=1 00:03:08.215 ' 
00:03:08.215 14:58:46 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:08.215 --rc lcov_branch_coverage=1 00:03:08.215 --rc lcov_function_coverage=1 00:03:08.215 --rc genhtml_branch_coverage=1 00:03:08.215 --rc genhtml_function_coverage=1 00:03:08.215 --rc genhtml_legend=1 00:03:08.215 --rc geninfo_all_blocks=1 00:03:08.215 ' 00:03:08.215 14:58:46 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:08.215 --rc lcov_branch_coverage=1 00:03:08.215 --rc lcov_function_coverage=1 00:03:08.215 --rc genhtml_branch_coverage=1 00:03:08.215 --rc genhtml_function_coverage=1 00:03:08.215 --rc genhtml_legend=1 00:03:08.215 --rc geninfo_all_blocks=1 00:03:08.215 --no-external' 00:03:08.215 14:58:46 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:08.215 --rc lcov_branch_coverage=1 00:03:08.215 --rc lcov_function_coverage=1 00:03:08.215 --rc genhtml_branch_coverage=1 00:03:08.215 --rc genhtml_function_coverage=1 00:03:08.215 --rc genhtml_legend=1 00:03:08.215 --rc geninfo_all_blocks=1 00:03:08.215 --no-external' 00:03:08.215 14:58:46 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:08.474 lcov: LCOV version 1.14 00:03:08.474 14:58:46 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:23.347 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:23.347 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:38.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:38.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:03:38.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:38.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:38.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:38.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:38.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:38.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:38.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:38.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:38.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:38.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:38.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:38.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:38.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:38.223 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:38.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:38.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:38.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:38.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:38.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:38.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:38.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:38.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:38.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:38.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:38.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:38.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:38.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:38.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:38.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:38.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:38.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:38.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:38.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:38.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:38.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:38.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:38.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:38.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:38.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:38.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:38.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:38.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:38.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:38.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:38.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:38.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:38.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions 
found 00:03:38.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:38.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:38.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:38.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:38.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:38.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:38.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:38.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:38.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:38.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:38.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:38.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:38.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:38.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:38.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:38.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:38.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:38.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:38.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:38.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:38.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:38.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:38.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:38.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:38.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:38.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:38.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:38.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:38.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:38.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:38.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:38.223 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:38.223 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 
00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 
00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 
00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:38.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:38.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:40.127 14:59:18 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:40.127 14:59:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:40.127 14:59:18 -- common/autotest_common.sh@10 -- # set +x 00:03:40.127 14:59:18 -- spdk/autotest.sh@91 -- # rm -f 00:03:40.127 14:59:18 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:40.693 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:41.272 0000:00:11.0 (1b36 0010): Already using the nvme 
driver 00:03:41.272 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:41.272 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:03:41.272 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:03:41.272 14:59:19 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:41.272 14:59:19 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:41.272 14:59:19 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:41.272 14:59:19 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:41.272 14:59:19 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:41.272 14:59:19 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:41.272 14:59:19 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:41.272 14:59:19 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:41.272 14:59:19 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:41.272 14:59:19 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:41.272 14:59:19 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:41.272 14:59:19 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:41.272 14:59:19 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:41.272 14:59:19 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:41.272 14:59:19 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:41.272 14:59:19 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:03:41.272 14:59:19 -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:03:41.272 14:59:19 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:41.272 14:59:19 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:41.272 14:59:19 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:41.272 14:59:19 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:03:41.272 14:59:19 -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:03:41.272 14:59:19 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:03:41.272 14:59:19 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:41.272 14:59:19 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:41.272 14:59:19 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:03:41.272 14:59:19 -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:03:41.272 14:59:19 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:03:41.272 14:59:19 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:41.272 14:59:19 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:41.272 14:59:19 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:03:41.272 14:59:19 -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:03:41.272 14:59:19 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:41.272 14:59:19 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:41.272 14:59:19 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:41.272 14:59:19 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:03:41.272 14:59:19 -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:03:41.272 14:59:19 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:41.272 14:59:19 -- common/autotest_common.sh@1665 
-- # [[ none != none ]] 00:03:41.272 14:59:19 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:41.272 14:59:19 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:41.272 14:59:19 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:41.272 14:59:19 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:41.272 14:59:19 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:41.272 14:59:19 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:41.530 No valid GPT data, bailing 00:03:41.530 14:59:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:41.530 14:59:19 -- scripts/common.sh@391 -- # pt= 00:03:41.530 14:59:19 -- scripts/common.sh@392 -- # return 1 00:03:41.530 14:59:19 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:41.530 1+0 records in 00:03:41.530 1+0 records out 00:03:41.530 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108302 s, 96.8 MB/s 00:03:41.530 14:59:19 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:41.530 14:59:19 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:41.530 14:59:19 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:03:41.530 14:59:19 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:03:41.530 14:59:19 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:41.530 No valid GPT data, bailing 00:03:41.530 14:59:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:41.530 14:59:19 -- scripts/common.sh@391 -- # pt= 00:03:41.530 14:59:19 -- scripts/common.sh@392 -- # return 1 00:03:41.530 14:59:19 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:41.530 1+0 records in 00:03:41.530 1+0 records out 00:03:41.530 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00425114 s, 247 MB/s 00:03:41.530 14:59:19 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:41.530 14:59:19 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:41.530 14:59:19 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:03:41.530 14:59:19 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:03:41.530 14:59:19 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:03:41.530 No valid GPT data, bailing 00:03:41.530 14:59:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:41.530 14:59:19 -- scripts/common.sh@391 -- # pt= 00:03:41.530 14:59:19 -- scripts/common.sh@392 -- # return 1 00:03:41.530 14:59:19 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:03:41.530 1+0 records in 00:03:41.530 1+0 records out 00:03:41.530 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00407599 s, 257 MB/s 00:03:41.530 14:59:19 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:41.530 14:59:19 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:41.530 14:59:19 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n2 00:03:41.530 14:59:19 -- scripts/common.sh@378 -- # local block=/dev/nvme2n2 pt 00:03:41.530 14:59:19 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:03:41.788 No valid GPT data, bailing 00:03:41.788 14:59:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:03:41.788 14:59:19 -- scripts/common.sh@391 -- # pt= 00:03:41.788 14:59:19 -- scripts/common.sh@392 -- # return 1 00:03:41.788 14:59:19 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:03:41.788 1+0 records in 
00:03:41.788 1+0 records out 00:03:41.789 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00621702 s, 169 MB/s 00:03:41.789 14:59:19 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:41.789 14:59:19 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:41.789 14:59:19 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n3 00:03:41.789 14:59:19 -- scripts/common.sh@378 -- # local block=/dev/nvme2n3 pt 00:03:41.789 14:59:19 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:03:41.789 No valid GPT data, bailing 00:03:41.789 14:59:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:03:41.789 14:59:19 -- scripts/common.sh@391 -- # pt= 00:03:41.789 14:59:19 -- scripts/common.sh@392 -- # return 1 00:03:41.789 14:59:19 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:03:41.789 1+0 records in 00:03:41.789 1+0 records out 00:03:41.789 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00607483 s, 173 MB/s 00:03:41.789 14:59:19 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:41.789 14:59:19 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:41.789 14:59:19 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme3n1 00:03:41.789 14:59:19 -- scripts/common.sh@378 -- # local block=/dev/nvme3n1 pt 00:03:41.789 14:59:19 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:03:41.789 No valid GPT data, bailing 00:03:41.789 14:59:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:41.789 14:59:19 -- scripts/common.sh@391 -- # pt= 00:03:41.789 14:59:19 -- scripts/common.sh@392 -- # return 1 00:03:41.789 14:59:19 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:03:41.789 1+0 records in 00:03:41.789 1+0 records out 00:03:41.789 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0040192 s, 261 MB/s 00:03:41.789 14:59:19 -- spdk/autotest.sh@118 -- # sync 00:03:42.047 14:59:19 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:42.047 14:59:19 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:42.047 14:59:19 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:44.590 14:59:22 -- spdk/autotest.sh@124 -- # uname -s 00:03:44.590 14:59:22 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:44.590 14:59:22 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:44.590 14:59:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:44.590 14:59:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.590 14:59:22 -- common/autotest_common.sh@10 -- # set +x 00:03:44.590 ************************************ 00:03:44.590 START TEST setup.sh 00:03:44.590 ************************************ 00:03:44.590 14:59:22 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:44.590 * Looking for test storage... 
00:03:44.590 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:44.590 14:59:22 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:44.590 14:59:22 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:44.590 14:59:22 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:44.590 14:59:22 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:44.590 14:59:22 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.590 14:59:22 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:44.590 ************************************ 00:03:44.590 START TEST acl 00:03:44.590 ************************************ 00:03:44.590 14:59:22 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:44.590 * Looking for test storage... 00:03:44.590 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:44.590 14:59:22 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:44.590 14:59:22 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:44.590 14:59:22 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:44.590 14:59:22 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:44.590 14:59:22 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:44.590 14:59:22 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:44.590 14:59:22 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:44.590 14:59:22 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:44.590 14:59:22 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:44.590 14:59:22 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:44.590 14:59:22 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:44.590 14:59:22 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:44.590 14:59:22 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:44.590 14:59:22 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:44.590 14:59:22 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:44.590 14:59:22 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:03:44.590 14:59:22 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:03:44.590 14:59:22 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:44.590 14:59:22 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:44.590 14:59:22 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:44.590 14:59:22 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:03:44.590 14:59:22 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:03:44.590 14:59:22 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:03:44.590 14:59:22 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:44.590 14:59:22 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:44.590 14:59:22 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:03:44.590 14:59:22 
setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:03:44.590 14:59:22 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:03:44.590 14:59:22 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:44.590 14:59:22 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:44.590 14:59:22 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:03:44.590 14:59:22 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:03:44.590 14:59:22 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:44.590 14:59:22 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:44.590 14:59:22 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:44.591 14:59:22 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:03:44.591 14:59:22 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:03:44.591 14:59:22 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:44.591 14:59:22 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:44.591 14:59:22 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:44.591 14:59:22 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:44.591 14:59:22 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:44.591 14:59:22 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:44.591 14:59:22 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:44.591 14:59:22 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:44.591 14:59:22 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:45.968 14:59:23 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:45.968 14:59:23 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:45.968 14:59:23 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:45.968 14:59:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:45.968 14:59:23 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.968 14:59:23 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:46.538 14:59:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:03:46.538 14:59:24 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:46.538 14:59:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.105 Hugepages 00:03:47.105 node hugesize free / total 00:03:47.105 14:59:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:47.105 14:59:25 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:47.105 14:59:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.105 00:03:47.105 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:47.105 14:59:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:47.105 14:59:25 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:47.105 14:59:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.365 14:59:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:47.365 14:59:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:47.365 14:59:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:47.365 14:59:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:03:47.365 14:59:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:03:47.365 14:59:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:47.365 14:59:25 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:47.365 14:59:25 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:47.365 14:59:25 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:47.365 14:59:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.365 14:59:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:03:47.365 14:59:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:47.365 14:59:25 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:47.365 14:59:25 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:47.365 14:59:25 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:47.365 14:59:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.365 14:59:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:12.0 == *:*:*.* ]] 00:03:47.365 14:59:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:47.365 14:59:25 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:03:47.365 14:59:25 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:47.365 14:59:25 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:47.365 14:59:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.624 14:59:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:13.0 == *:*:*.* ]] 00:03:47.624 14:59:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:47.624 14:59:25 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:03:47.624 14:59:25 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:47.624 14:59:25 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:47.624 14:59:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.624 14:59:25 setup.sh.acl -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:03:47.624 14:59:25 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:47.624 14:59:25 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:47.624 14:59:25 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:47.624 14:59:25 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:47.624 ************************************ 00:03:47.624 START TEST denied 00:03:47.624 ************************************ 00:03:47.624 14:59:25 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:47.624 14:59:25 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:03:47.624 14:59:25 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:03:47.624 14:59:25 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:47.624 14:59:25 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.624 14:59:25 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:49.000 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:03:49.000 14:59:26 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:03:49.000 14:59:26 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:49.000 14:59:26 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:49.000 14:59:26 setup.sh.acl.denied -- 
setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:03:49.000 14:59:26 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:03:49.000 14:59:26 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:49.000 14:59:26 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:49.000 14:59:26 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:49.000 14:59:26 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:49.000 14:59:26 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:55.561 00:03:55.561 real 0m7.552s 00:03:55.561 user 0m0.960s 00:03:55.561 sys 0m1.690s 00:03:55.561 14:59:33 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.561 14:59:33 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:55.561 ************************************ 00:03:55.561 END TEST denied 00:03:55.561 ************************************ 00:03:55.561 14:59:33 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:55.561 14:59:33 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:55.561 14:59:33 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.561 14:59:33 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.561 14:59:33 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:55.561 ************************************ 00:03:55.561 START TEST allowed 00:03:55.561 ************************************ 00:03:55.561 14:59:33 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:55.561 14:59:33 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:03:55.562 14:59:33 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:55.562 14:59:33 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:03:55.562 14:59:33 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.562 14:59:33 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:56.502 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:56.502 14:59:34 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:03:56.502 14:59:34 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:56.502 14:59:34 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:03:56.502 14:59:34 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:03:56.502 14:59:34 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:03:56.502 14:59:34 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:56.502 14:59:34 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:56.502 14:59:34 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:03:56.502 14:59:34 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:12.0 ]] 00:03:56.502 14:59:34 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:12.0/driver 00:03:56.502 14:59:34 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:56.502 14:59:34 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:56.502 14:59:34 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in 
"$@" 00:03:56.502 14:59:34 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:13.0 ]] 00:03:56.502 14:59:34 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:13.0/driver 00:03:56.502 14:59:34 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:56.502 14:59:34 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:56.502 14:59:34 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:56.502 14:59:34 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:56.502 14:59:34 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:57.881 00:03:57.881 real 0m2.690s 00:03:57.881 user 0m1.027s 00:03:57.881 sys 0m1.672s 00:03:57.881 14:59:35 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:57.881 14:59:35 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:57.881 ************************************ 00:03:57.881 END TEST allowed 00:03:57.881 ************************************ 00:03:57.881 14:59:35 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:57.881 ************************************ 00:03:57.881 END TEST acl 00:03:57.881 ************************************ 00:03:57.881 00:03:57.881 real 0m13.341s 00:03:57.881 user 0m3.353s 00:03:57.881 sys 0m5.140s 00:03:57.881 14:59:35 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:57.881 14:59:35 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:57.881 14:59:35 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:57.881 14:59:35 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:57.881 14:59:35 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:57.881 14:59:35 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:57.881 14:59:35 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:57.881 ************************************ 00:03:57.881 START TEST hugepages 00:03:57.881 ************************************ 00:03:57.881 14:59:35 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:58.142 * Looking for test storage... 
00:03:58.142 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 5825344 kB' 'MemAvailable: 7409160 kB' 'Buffers: 2436 kB' 'Cached: 1797092 kB' 'SwapCached: 0 kB' 'Active: 448532 kB' 'Inactive: 1457028 kB' 'Active(anon): 116548 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1457028 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 107732 kB' 'Mapped: 48632 kB' 'Shmem: 10512 kB' 'KReclaimable: 63488 kB' 'Slab: 138848 kB' 'SReclaimable: 63488 kB' 'SUnreclaim: 75360 kB' 'KernelStack: 6392 kB' 'PageTables: 3936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 322068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54968 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': 
' 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.142 14:59:36 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.142 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # 
read -r var val _ 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.143 14:59:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:03:58.144 14:59:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.144 14:59:36 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:58.144 14:59:36 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:58.144 14:59:36 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:58.144 14:59:36 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:58.144 14:59:36 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:58.144 14:59:36 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:58.144 14:59:36 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:58.144 14:59:36 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:58.144 14:59:36 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:58.144 14:59:36 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:58.144 14:59:36 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:58.144 14:59:36 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.144 14:59:36 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:58.144 14:59:36 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:58.144 14:59:36 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:58.144 14:59:36 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:58.144 14:59:36 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:58.144 14:59:36 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:58.144 14:59:36 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:58.144 14:59:36 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:58.144 14:59:36 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:58.144 14:59:36 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:58.144 14:59:36 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:58.144 14:59:36 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:58.144 14:59:36 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:58.144 14:59:36 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.144 14:59:36 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.144 14:59:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:58.144 ************************************ 00:03:58.144 START TEST default_setup 00:03:58.144 ************************************ 00:03:58.144 14:59:36 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:58.144 14:59:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:58.144 14:59:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:58.144 14:59:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:58.144 14:59:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:58.144 14:59:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # 
node_ids=('0') 00:03:58.144 14:59:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:58.144 14:59:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:58.144 14:59:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:58.144 14:59:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:58.144 14:59:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:58.144 14:59:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:58.144 14:59:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:58.144 14:59:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:58.144 14:59:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:58.144 14:59:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:58.144 14:59:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:58.144 14:59:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:58.144 14:59:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:58.144 14:59:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:58.144 14:59:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:58.144 14:59:36 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.144 14:59:36 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:58.712 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:59.654 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:03:59.654 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:59.654 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:59.654 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:59.654 
14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7946928 kB' 'MemAvailable: 9530460 kB' 'Buffers: 2436 kB' 'Cached: 1797080 kB' 'SwapCached: 0 kB' 'Active: 462428 kB' 'Inactive: 1457040 kB' 'Active(anon): 130444 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1457040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 121816 kB' 'Mapped: 48696 kB' 'Shmem: 10472 kB' 'KReclaimable: 62896 kB' 'Slab: 137996 kB' 'SReclaimable: 62896 kB' 'SUnreclaim: 75100 kB' 'KernelStack: 6368 kB' 'PageTables: 4088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 340936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55016 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.654 14:59:37 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.654 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.655 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7946928 kB' 'MemAvailable: 9530460 kB' 'Buffers: 2436 kB' 'Cached: 1797080 kB' 'SwapCached: 0 kB' 'Active: 462440 kB' 'Inactive: 1457040 kB' 'Active(anon): 130456 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1457040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 121592 kB' 'Mapped: 48572 kB' 'Shmem: 10472 kB' 'KReclaimable: 62896 kB' 'Slab: 137996 kB' 'SReclaimable: 62896 kB' 'SUnreclaim: 75100 kB' 'KernelStack: 6368 kB' 'PageTables: 4080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 340936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55000 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.656 14:59:37 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.656 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.657 14:59:37 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7947220 kB' 'MemAvailable: 9530752 kB' 'Buffers: 2436 kB' 'Cached: 1797080 kB' 'SwapCached: 0 kB' 'Active: 462468 kB' 'Inactive: 1457040 kB' 'Active(anon): 130484 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1457040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 121616 kB' 'Mapped: 48572 kB' 'Shmem: 10472 kB' 'KReclaimable: 62896 kB' 'Slab: 137996 kB' 'SReclaimable: 62896 kB' 'SUnreclaim: 75100 kB' 'KernelStack: 6368 kB' 'PageTables: 4080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 340936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55000 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:03:59.657 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.658 14:59:37 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.658 14:59:37 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.658 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.659 
14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
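The same field-by-field scan is repeated for HugePages_Surp and HugePages_Rsvd, and the hugepages.sh lines just below then report anon=0, surp=0 and resv=0 before an arithmetic check confirms the 1024 requested pages are all accounted for. A condensed restatement of that check, assuming the variable names shown in the trace and the values reported in this run:

# values echoed by this run of default_setup
nr_hugepages=1024   # HugePages_Total
surp=0              # HugePages_Surp
resv=0              # HugePages_Rsvd
anon=0              # AnonHugePages value echoed at hugepages.sh@97
if (( 1024 == nr_hugepages + surp + resv )) && (( 1024 == nr_hugepages )); then
    echo "all 1024 hugepages allocated, none surplus or reserved"
fi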
00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:59.659 nr_hugepages=1024 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:59.659 resv_hugepages=0 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:59.659 surplus_hugepages=0 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:59.659 anon_hugepages=0 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == 
nr_hugepages )) 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.659 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.922 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.922 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.922 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.922 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7947220 kB' 'MemAvailable: 9530752 kB' 'Buffers: 2436 kB' 'Cached: 1797080 kB' 'SwapCached: 0 kB' 'Active: 462496 kB' 'Inactive: 1457040 kB' 'Active(anon): 130512 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1457040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 121692 kB' 'Mapped: 48572 kB' 'Shmem: 10472 kB' 'KReclaimable: 62896 kB' 'Slab: 137996 kB' 'SReclaimable: 62896 kB' 'SUnreclaim: 75100 kB' 'KernelStack: 6384 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 340936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55000 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:03:59.922 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.922 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.922 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.922 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.922 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.922 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.922 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.922 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.922 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.922 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.922 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.922 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[... repetitive xtrace condensed: the setup/common.sh@31-32 loop keeps reading the remaining /proc/meminfo fields (Buffers through Unaccepted, in /proc/meminfo order) and hits `continue` for every key that is not HugePages_Total ...]
00:03:59.924 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.924 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:59.924 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:59.924 14:59:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:59.924 14:59:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:59.924 14:59:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:59.924 14:59:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.924 14:59:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:59.924 14:59:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:59.924 14:59:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:59.924 14:59:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:59.924 14:59:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:59.924 14:59:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:59.924 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.924 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:59.925 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:59.925 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:59.925 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.925 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:59.925 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:59.925 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
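The get_meminfo call being traced here is easier to read without the xtrace noise. The sketch below is a hedged reconstruction of what the trace shows (paths and field names are taken from the log; get_meminfo_sketch is a hypothetical name, not the exact helper in setup/common.sh): read /proc/meminfo, or the per-node copy when a node id is passed, strip the "Node N " prefix that the per-node file adds to every line, and print the value of the requested field.

# Hedged sketch of the lookup traced above; the helper name is hypothetical.
get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo line var val _
    # Per-node statistics live in sysfs; each line there starts with "Node N ".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        [[ -n $node ]] && line=${line#"Node $node "}   # drop the per-node prefix
        IFS=': ' read -r var val _ <<<"$line"          # e.g. var=HugePages_Surp val=0
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done <"$mem_f"
    return 1
}
# Matching this log: get_meminfo_sketch HugePages_Total prints 1024 (the echo 1024
# above), and get_meminfo_sketch HugePages_Surp 0 prints 0 (the echo 0 below).

The helper in the log additionally snapshots the whole file into the output (the printf '%s\n' lines that follow), which is where the MemTotal/MemFree figures in this section come from.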
00:03:59.925 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.925 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.925 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.925 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7946968 kB' 'MemUsed: 4295012 kB' 'SwapCached: 0 kB' 'Active: 462180 kB' 'Inactive: 1457040 kB' 'Active(anon): 130196 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1457040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'FilePages: 1799516 kB' 'Mapped: 48572 kB' 'AnonPages: 121592 kB' 'Shmem: 10472 kB' 'KernelStack: 6368 kB' 'PageTables: 4080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62896 kB' 'Slab: 137984 kB' 'SReclaimable: 62896 kB' 'SUnreclaim: 75088 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... repetitive xtrace condensed: the same setup/common.sh@31-32 loop walks the node0 fields just printed (MemTotal through HugePages_Free) and hits `continue` for every key that is not HugePages_Surp ...]
00:03:59.926 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.926 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:59.926 14:59:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:59.926 14:59:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:59.926 node0=1024 expecting 1024 14:59:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:59.926 14:59:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:59.926 14:59:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:59.926 14:59:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:59.926 14:59:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:59.926 00:03:59.926 real 0m1.670s 00:03:59.926 user 0m0.676s 00:03:59.927 sys 0m0.957s 00:03:59.927 14:59:37 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.927 ************************************ 00:03:59.927 END TEST default_setup 00:03:59.927 ************************************ 00:03:59.927 14:59:37 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:59.927 14:59:37 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
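Before the next test's trace, it helps to spell out the arithmetic behind the get_test_nr_hugepages 1048576 0 call shown immediately below: per_node_1G_alloc asks for 1 GiB worth of hugepages (1048576 kB) on node 0, and with the 2048 kB default hugepage size reported by /proc/meminfo that works out to 1048576 / 2048 = 512 pages, all assigned to node 0 (hence NRHUGE=512 and HUGENODE=0 in the trace). A hedged sketch of that conversion, using a hypothetical helper name rather than the verbatim setup/hugepages.sh code:

# Hypothetical sketch of the size-to-pages conversion traced below.
get_test_nr_hugepages_sketch() {
    local size_kb=$1; shift                 # e.g. 1048576 (1 GiB expressed in kB)
    local node_ids=("$@")                   # e.g. (0)
    local default_kb nr_hugepages node
    # Default hugepage size; 2048 kB on this machine per the meminfo snapshot below.
    default_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
    nr_hugepages=$((size_kb / default_kb))  # 1048576 / 2048 = 512
    echo "NRHUGE=$nr_hugepages HUGENODE=${node_ids[*]}"
    for node in "${node_ids[@]}"; do
        # The traced code records the count per requested node (a single node here).
        echo "node${node}=${nr_hugepages}"
    done
}
# get_test_nr_hugepages_sketch 1048576 0   ->   NRHUGE=512 HUGENODE=0 / node0=512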
00:03:59.927 14:59:37 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:59.927 14:59:37 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.927 14:59:37 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.927 14:59:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:59.927 ************************************ 00:03:59.927 START TEST per_node_1G_alloc 00:03:59.927 ************************************ 00:03:59.927 14:59:37 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:59.927 14:59:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:59.927 14:59:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:03:59.927 14:59:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:59.927 14:59:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:59.927 14:59:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:59.927 14:59:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:59.927 14:59:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:59.927 14:59:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:59.927 14:59:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:59.927 14:59:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:59.927 14:59:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:59.927 14:59:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:59.927 14:59:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:59.927 14:59:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:59.927 14:59:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:59.927 14:59:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:59.927 14:59:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:59.927 14:59:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:59.927 14:59:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:59.927 14:59:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:59.927 14:59:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:59.927 14:59:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:03:59.927 14:59:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:59.927 14:59:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.927 14:59:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:00.535 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:00.535 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:00.535 0000:00:10.0
(1b36 0010): Already using the uio_pci_generic driver 00:04:00.535 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:00.535 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:00.535 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:00.535 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:00.535 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:00.535 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:00.535 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:00.535 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:00.535 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:00.535 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:00.535 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:00.535 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:00.535 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:00.535 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:00.535 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:00.535 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.535 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.535 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.535 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.535 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.535 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.535 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.535 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.535 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8988728 kB' 'MemAvailable: 10572280 kB' 'Buffers: 2436 kB' 'Cached: 1797080 kB' 'SwapCached: 0 kB' 'Active: 462676 kB' 'Inactive: 1457060 kB' 'Active(anon): 130692 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1457060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 121844 kB' 'Mapped: 48568 kB' 'Shmem: 10472 kB' 'KReclaimable: 62896 kB' 'Slab: 137988 kB' 'SReclaimable: 62896 kB' 'SUnreclaim: 75092 kB' 'KernelStack: 6420 kB' 'PageTables: 4100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 340936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55016 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB'
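verify_nr_hugepages, whose trace resumes below, does three things with this snapshot: it reads AnonHugePages only when transparent hugepages are not switched off (that is what the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test above checks, i.e. the THP mode string does not have [never] selected), it collects the surplus and reserved hugepage counters, and it expects the kernel's HugePages_Total to equal the requested count plus surplus plus reserved, just as the (( 1024 == nr_hugepages + surp + resv )) check did for default_setup. A hedged, condensed sketch of that logic (hypothetical function name, reusing the get_meminfo_sketch helper sketched earlier, not the verbatim setup/hugepages.sh):

# Hypothetical condensed form of the verification traced here (expected=512 for this test).
verify_nr_hugepages_sketch() {
    local expected=$1 anon=0 surp resv total
    local thp=/sys/kernel/mm/transparent_hugepage/enabled
    # Only look at AnonHugePages when THP is not set to "never".
    if [[ -r $thp && $(<"$thp") != *'[never]'* ]]; then
        anon=$(get_meminfo_sketch AnonHugePages)    # 0 kB in the snapshot above
    fi
    surp=$(get_meminfo_sketch HugePages_Surp)       # 0 in the snapshot above
    resv=$(get_meminfo_sketch HugePages_Rsvd)       # 0 in the snapshot above
    total=$(get_meminfo_sketch HugePages_Total)     # 512 after this setup run
    (( total == expected + surp + resv )) || return 1
    echo "HugePages_Total=$total matches expected=$expected (surp=$surp resv=$resv anon=${anon} kB)"
}
# verify_nr_hugepages_sketch 512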
[... repetitive xtrace condensed: the setup/common.sh@31-32 loop walks the /proc/meminfo snapshot just printed (MemTotal through HardwareCorrupted) and hits `continue` for every key that is not AnonHugePages ...]
00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- #
anon=0 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8988988 kB' 'MemAvailable: 10572540 kB' 'Buffers: 2436 kB' 'Cached: 1797080 kB' 'SwapCached: 0 kB' 'Active: 462496 kB' 'Inactive: 1457060 kB' 'Active(anon): 130512 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1457060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 121616 kB' 'Mapped: 48472 kB' 'Shmem: 10472 kB' 'KReclaimable: 62896 kB' 'Slab: 137992 kB' 'SReclaimable: 62896 kB' 'SUnreclaim: 75096 kB' 'KernelStack: 6412 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 340936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54984 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.804 14:59:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.804 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.805 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@33 -- # return 0 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8988484 kB' 'MemAvailable: 10572036 kB' 'Buffers: 2436 kB' 'Cached: 1797080 kB' 'SwapCached: 0 kB' 'Active: 462660 kB' 'Inactive: 1457060 kB' 'Active(anon): 130676 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1457060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 121780 kB' 'Mapped: 48472 kB' 'Shmem: 10472 kB' 'KReclaimable: 62896 kB' 'Slab: 137988 kB' 'SReclaimable: 62896 kB' 'SUnreclaim: 75092 kB' 'KernelStack: 6396 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 340936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55000 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.806 14:59:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.806 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.807 14:59:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.807 
14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.807 14:59:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.807 14:59:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.807 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.808 14:59:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.808 nr_hugepages=512 00:04:00.808 resv_hugepages=0 00:04:00.808 surplus_hugepages=0 00:04:00.808 anon_hugepages=0 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:00.808 
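The xtrace above is the get_meminfo helper in setup/common.sh walking /proc/meminfo field by field: it splits each line on ': ', keeps hitting "continue" until the field name equals the key it was asked for (AnonHugePages, then HugePages_Surp, then HugePages_Rsvd), echoes that value and returns, which is how anon=0, surp=0 and resv=0 get set and the nr_hugepages=512 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0 summary gets echoed. The sketch below is a minimal, hypothetical re-creation of that lookup written only from what the trace shows; the function name, the empty-node handling and the not-found fallback are assumptions, not the actual setup/common.sh source (which uses mapfile and a printf of the whole snapshot).

#!/usr/bin/env bash
# Hypothetical sketch of the lookup pattern visible in the xtrace above;
# the real helper lives in setup/common.sh and differs in details.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node queries (as in this per_node_1G_alloc test) read the
    # node-specific file instead, when a node number is given.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
        # Skip every field until the requested key comes up
        # (this is the long run of "continue" lines in the trace).
        [[ $var == "$get" ]] || continue
        echo "$val"   # the unit ("kB") lands in the discarded third field
        return 0
    done < "$mem_f"
    echo 0            # assumed fallback when the key is absent
}

# Used the way the trace suggests:
#   anon=$(get_meminfo_sketch AnonHugePages)    -> 0
#   surp=$(get_meminfo_sketch HugePages_Surp)   -> 0
#   resv=$(get_meminfo_sketch HugePages_Rsvd)   -> 0
# after which hugepages.sh echoes the nr_hugepages/resv/surplus/anon summary
# and goes on to compare those values against HugePages_Total.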
14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8988484 kB' 'MemAvailable: 10572036 kB' 'Buffers: 2436 kB' 'Cached: 1797080 kB' 'SwapCached: 0 kB' 'Active: 462492 kB' 'Inactive: 1457060 kB' 'Active(anon): 130508 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1457060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 121872 kB' 'Mapped: 48472 kB' 'Shmem: 10472 kB' 'KReclaimable: 62896 kB' 'Slab: 137988 kB' 'SReclaimable: 62896 kB' 'SUnreclaim: 75092 kB' 'KernelStack: 6396 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 340936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55000 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.808 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.809 14:59:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.809 14:59:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.809 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.810 14:59:38 
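The trace up to this point is one full pass of the get_meminfo helper in setup/common.sh: given a node argument it switches to that node's own meminfo file, strips the "Node N " prefix, then walks every "Key: value" field until the requested key (HugePages_Total here) matches and its value (512) is echoed back, which is why the long run of "continue" lines above is just the loop skipping non-matching fields. A condensed sketch reconstructed from the traced commands follows; the names come from the trace, but treat it as a paraphrase of the helper rather than the verbatim script.

  #!/usr/bin/env bash
  shopt -s extglob                        # the traced prefix-strip uses a +([0-9]) pattern
  get_meminfo() {                         # usage: get_meminfo <field> [node]
      local get=$1 node=$2 var val
      local mem_f=/proc/meminfo mem
      # Per-node queries read the node's own meminfo and drop the "Node N " prefix
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem <"$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")
      # Scan field by field until the requested key matches, then return its value
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }
  get_meminfo HugePages_Total 0           # -> 512 on this run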
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8988484 kB' 'MemUsed: 3253496 kB' 'SwapCached: 0 kB' 'Active: 462464 kB' 'Inactive: 1457060 kB' 'Active(anon): 130480 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1457060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'FilePages: 1799516 kB' 'Mapped: 48472 kB' 'AnonPages: 121832 kB' 'Shmem: 10472 kB' 'KernelStack: 6380 kB' 'PageTables: 4076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62896 kB' 'Slab: 137988 kB' 'SReclaimable: 62896 kB' 'SUnreclaim: 75092 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.810 14:59:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.810 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.811 14:59:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.811 14:59:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.811 14:59:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.811 14:59:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.811 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:00.812 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:00.812 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:00.812 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:00.812 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:00.812 node0=512 expecting 512 00:04:00.812 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:00.812 00:04:00.812 real 0m0.932s 00:04:00.812 user 0m0.404s 00:04:00.812 sys 0m0.557s 00:04:00.812 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:00.812 14:59:38 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:00.812 ************************************ 00:04:00.812 END TEST per_node_1G_alloc 00:04:00.812 ************************************ 00:04:00.812 14:59:38 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:00.812 14:59:38 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:00.812 14:59:38 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:00.812 14:59:38 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.812 14:59:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:00.812 ************************************ 00:04:00.812 START TEST even_2G_alloc 00:04:00.812 ************************************ 00:04:00.812 14:59:38 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:00.812 14:59:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:00.812 14:59:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:00.812 14:59:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:00.812 14:59:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:00.812 14:59:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:00.812 14:59:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:00.812 14:59:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:00.812 14:59:38 
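With get_meminfo returning 0 surplus pages for node 0, the per_node_1G_alloc check above closes out: the 512 pages reported system-wide must equal nr_hugepages plus surplus plus reserved, and the count the kernel reports for node0 must match the count the test expected to land there, hence the "node0=512 expecting 512" line before the test is marked passed and even_2G_alloc starts. Below is a minimal sketch of that bookkeeping with this run's values plugged in (single NUMA node, no surplus or reserved pages); variable names follow the traced hugepages.sh, but the wiring around the echo is inferred from the output format.

  # Sketch only: values observed in this run, not a general implementation.
  nr_hugepages=512 surp=0 resv=0
  (( 512 == nr_hugepages + surp + resv )) || exit 1    # hugepages.sh@110
  nodes_test[0]=512              # pages the test expected to allocate on node0
  (( nodes_test[0] += resv ))    # hugepages.sh@116
  (( nodes_test[0] += 0 ))       # hugepages.sh@117: node0 HugePages_Surp via get_meminfo
  nodes_sys[0]=512               # pages the kernel reports on node0 (get_nodes)
  echo "node0=${nodes_sys[0]} expecting ${nodes_test[0]}"
  [[ ${nodes_sys[0]} == "${nodes_test[0]}" ]]          # hugepages.sh@130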
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:00.812 14:59:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:00.812 14:59:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:00.812 14:59:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:00.812 14:59:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:00.812 14:59:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:00.812 14:59:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:00.812 14:59:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:00.812 14:59:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:00.812 14:59:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:00.812 14:59:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:00.812 14:59:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:00.812 14:59:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:00.812 14:59:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:00.812 14:59:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:00.812 14:59:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.812 14:59:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:01.380 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:01.643 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:01.643 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:01.643 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:01.643 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.643 14:59:39 
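The even_2G_alloc test sizes itself the same way: get_test_nr_hugepages is asked for a size of 2097152, i.e. 2 GiB expressed in kB, which at the 2048 kB hugepage size reported in the meminfo dumps works out to 1024 pages, all assigned to the single node. NRHUGE=1024 and HUGE_EVEN_ALLOC=yes are then set before scripts/setup.sh is re-run (the PCI "Active devices" / "Already using the uio_pci_generic driver" lines above are that re-run), after which verify_nr_hugepages begins by checking that the transparent-hugepage setting is not [never] and reading AnonHugePages. The sizing as plain arithmetic, with the hugepage size taken from this run's meminfo:

  size_kb=2097152                                   # 2 GiB requested by even_2G_alloc
  hugepagesize_kb=2048                              # 'Hugepagesize: 2048 kB' in the dumps
  nr_hugepages=$(( size_kb / hugepagesize_kb ))     # = 1024
  nodes_test[0]=$nr_hugepages                       # single node gets the whole allocation
  NRHUGE=$nr_hugepages HUGE_EVEN_ALLOC=yes          # set at hugepages.sh@153 before scripts/setup.sh runs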
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7950308 kB' 'MemAvailable: 9533864 kB' 'Buffers: 2436 kB' 'Cached: 1797084 kB' 'SwapCached: 0 kB' 'Active: 462764 kB' 'Inactive: 1457064 kB' 'Active(anon): 130780 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1457064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 122224 kB' 'Mapped: 48524 kB' 'Shmem: 10472 kB' 'KReclaimable: 62896 kB' 'Slab: 137984 kB' 'SReclaimable: 62896 kB' 'SUnreclaim: 75088 kB' 'KernelStack: 6408 kB' 'PageTables: 4068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 340936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55048 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.643 14:59:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.643 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7950292 kB' 'MemAvailable: 9533852 kB' 'Buffers: 2436 kB' 'Cached: 1797088 kB' 'SwapCached: 0 kB' 'Active: 462500 kB' 'Inactive: 1457068 kB' 'Active(anon): 130516 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1457068 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 121660 kB' 'Mapped: 48572 kB' 'Shmem: 10472 kB' 'KReclaimable: 62896 kB' 'Slab: 138028 kB' 'SReclaimable: 62896 kB' 'SUnreclaim: 75132 kB' 'KernelStack: 6368 kB' 'PageTables: 4084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 340936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55032 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.644 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.645 14:59:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.645 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# continue 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 14:59:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7950292 kB' 'MemAvailable: 9533852 kB' 'Buffers: 2436 kB' 'Cached: 1797088 kB' 'SwapCached: 0 kB' 'Active: 462508 kB' 'Inactive: 1457068 kB' 'Active(anon): 130524 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1457068 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 121660 kB' 'Mapped: 48572 kB' 'Shmem: 10472 kB' 'KReclaimable: 62896 kB' 'Slab: 138028 kB' 'SReclaimable: 62896 kB' 'SUnreclaim: 75132 kB' 'KernelStack: 6368 kB' 'PageTables: 4084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 340936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55016 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.646 14:59:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.646 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.647 14:59:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 
14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
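The xtrace above is the same lookup repeated once per /proc/meminfo key: setup/common.sh snapshots the file (or a node's meminfo) with mapfile, strips any "Node <n> " prefix, then walks the snapshot with IFS=': ' read -r var val _, continuing past every key until the requested one (HugePages_Surp, HugePages_Rsvd, ...) matches and its value is echoed. A minimal, self-contained sketch of that pattern follows; the function name meminfo_get and its argument handling are illustrative stand-ins, not the actual setup/common.sh helper.

#!/usr/bin/env bash
# Illustrative sketch only -- a simplified stand-in for the get_meminfo helper
# whose xtrace appears above, not the real setup/common.sh implementation.
shopt -s extglob

meminfo_get() {
    local get=$1 node=${2:-} line var val _
    local mem_f=/proc/meminfo
    local -a mem

    # Per-node counters live in sysfs and carry a "Node <n> " prefix.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # drop the per-node prefix, if any

    for line in "${mem[@]}"; do
        # This is the IFS=': ' / read -r var val _ / continue cycle in the trace.
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] || continue
        echo "$val"                     # numeric value, without the "kB" suffix
        return 0
    done
    return 1
}

# Example lookups matching the values printed in the snapshot above:
meminfo_get HugePages_Surp    # 0 in this run
meminfo_get HugePages_Rsvd    # 0 in this run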
00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:01.648 nr_hugepages=1024 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:01.648 resv_hugepages=0 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:01.648 surplus_hugepages=0 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:01.648 anon_hugepages=0 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == 
nr_hugepages )) 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7950292 kB' 'MemAvailable: 9533852 kB' 'Buffers: 2436 kB' 'Cached: 1797088 kB' 'SwapCached: 0 kB' 'Active: 462440 kB' 'Inactive: 1457068 kB' 'Active(anon): 130456 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1457068 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 121852 kB' 'Mapped: 48572 kB' 'Shmem: 10472 kB' 'KReclaimable: 62896 kB' 'Slab: 138016 kB' 'SReclaimable: 62896 kB' 'SUnreclaim: 75120 kB' 'KernelStack: 6368 kB' 'PageTables: 4084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 340936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55032 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 14:59:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 14:59:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 
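The trace above shows setup/common.sh's get_meminfo helper being asked for HugePages_Surp on node 0: because a node argument is present, mem_f is switched from /proc/meminfo to /sys/devices/system/node/node0/meminfo, the file is loaded with mapfile, the leading "Node 0 " prefix is stripped from each entry, and the fields are then scanned one by one until the requested key matches, at which point its value is echoed. A minimal standalone sketch of that pattern follows; the function body is an illustration inferred from the trace, not the verbatim script.

  # Sketch of the get_meminfo lookup pattern seen in the trace (assumed, not verbatim).
  shopt -s extglob   # needed for the +([0-9]) prefix strip below
  get_meminfo() {
      local get=$1 node=${2:-}          # key to look up, optional NUMA node
      local mem_f=/proc/meminfo var val _
      # Per-node lookups read that node's own meminfo file when it exists.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")  # drop the "Node N " prefix on per-node files
      local line
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then
              echo "$val"               # e.g. 1024 for HugePages_Total, 0 for HugePages_Surp
              return 0
          fi
      done
      return 1
  }

Called as "get_meminfo HugePages_Surp 0", a helper like this would print the node-0 surplus count, which is 0 in the run above.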
00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.910 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7950292 kB' 'MemUsed: 4291688 kB' 'SwapCached: 0 kB' 'Active: 462328 kB' 'Inactive: 1457068 kB' 'Active(anon): 130344 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1457068 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 1799524 kB' 'Mapped: 48572 kB' 'AnonPages: 121496 kB' 'Shmem: 10472 kB' 'KernelStack: 6384 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62896 kB' 'Slab: 138004 kB' 'SReclaimable: 62896 kB' 'SUnreclaim: 75108 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.911 14:59:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.911 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.912 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.912 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.912 14:59:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.912 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.912 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.912 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.912 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.912 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.912 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.912 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.912 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.912 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.912 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.912 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.912 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.912 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.912 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.912 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.912 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.912 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.912 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.912 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.912 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.912 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:01.912 node0=1024 expecting 1024 00:04:01.912 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:01.912 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:01.912 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:01.912 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:01.912 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:01.912 ************************************ 00:04:01.912 END TEST even_2G_alloc 00:04:01.912 ************************************ 00:04:01.912 14:59:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:01.912 00:04:01.912 real 0m0.889s 00:04:01.912 user 0m0.360s 00:04:01.912 sys 0m0.570s 00:04:01.912 14:59:39 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.912 14:59:39 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:01.912 14:59:39 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:01.912 14:59:39 setup.sh.hugepages -- 
setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:01.912 14:59:39 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.912 14:59:39 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.912 14:59:39 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:01.912 ************************************ 00:04:01.912 START TEST odd_alloc 00:04:01.912 ************************************ 00:04:01.912 14:59:39 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:01.912 14:59:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:01.912 14:59:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:01.912 14:59:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:01.912 14:59:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:01.912 14:59:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:01.912 14:59:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:01.912 14:59:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:01.912 14:59:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:01.912 14:59:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:01.912 14:59:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:01.912 14:59:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:01.912 14:59:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:01.912 14:59:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:01.912 14:59:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:01.912 14:59:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:01.912 14:59:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:01.912 14:59:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:01.912 14:59:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:01.912 14:59:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:01.912 14:59:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:01.912 14:59:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:01.912 14:59:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:01.912 14:59:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.912 14:59:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:02.478 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:02.478 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:02.478 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:02.478 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:02.478 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:02.789 14:59:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:02.789 14:59:40 
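For odd_alloc the trace requests get_test_nr_hugepages 2098176, i.e. HUGEMEM=2049 MB worth of 2048 kB pages: 2098176 kB / 2048 kB = 1024.5, which ends up as the odd count nr_hugepages=1025, all placed on the single node (nodes_test[0]=1025). A small sketch of that sizing arithmetic is below; the variable names mirror the trace, but the exact rounding expression is an assumption, not the script's verbatim code.

  # Illustrative sizing math for the odd_alloc case traced above (assumed rounding).
  size_kb=2098176               # HUGEMEM=2049 MB expressed in kB (2049 * 1024)
  default_hugepages=2048        # Hugepagesize from /proc/meminfo, in kB
  # Ceiling division: 2098176 / 2048 = 1024.5 -> 1025 pages (an odd count)
  nr_hugepages=$(( (size_kb + default_hugepages - 1) / default_hugepages ))
  echo "$nr_hugepages"          # prints 1025
  nodes_test[0]=$nr_hugepages   # single-node VM: all pages land on node 0

verify_nr_hugepages then repeats the meminfo scan, as in the even_2G_alloc run above, to confirm that HugePages_Total matches this target plus any surplus and reserved pages before checking the per-node counts.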
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:02.789 14:59:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:02.789 14:59:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:02.789 14:59:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:02.789 14:59:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:02.789 14:59:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:02.789 14:59:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:02.789 14:59:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:02.789 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:02.789 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:02.789 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:02.789 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.789 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.789 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.789 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.789 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.789 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.789 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.789 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7942016 kB' 'MemAvailable: 9525576 kB' 'Buffers: 2436 kB' 'Cached: 1797088 kB' 'SwapCached: 0 kB' 'Active: 462856 kB' 'Inactive: 1457068 kB' 'Active(anon): 130872 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1457068 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 121712 kB' 'Mapped: 48700 kB' 'Shmem: 10472 kB' 'KReclaimable: 62896 kB' 'Slab: 137964 kB' 'SReclaimable: 62896 kB' 'SUnreclaim: 75068 kB' 'KernelStack: 6400 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 340936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55048 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:04:02.789 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.789 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.789 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.789 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.789 14:59:40 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:02.789 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.789 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.789 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.789 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.791 14:59:40 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7948920 kB' 'MemAvailable: 9532480 kB' 'Buffers: 2436 kB' 'Cached: 1797088 kB' 'SwapCached: 0 kB' 'Active: 462792 kB' 'Inactive: 1457068 kB' 'Active(anon): 130808 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1457068 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 121784 kB' 'Mapped: 48700 kB' 'Shmem: 10472 kB' 'KReclaimable: 62896 kB' 'Slab: 137956 kB' 'SReclaimable: 62896 kB' 'SUnreclaim: 75060 kB' 'KernelStack: 6432 kB' 'PageTables: 4272 
kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 340736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55032 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.791 14:59:40 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.791 14:59:40 setup.sh.hugepages.odd_alloc [xtrace of the per-field meminfo scan: every field from Active(anon) through HugePages_Rsvd is compared against HugePages_Surp and skipped with continue] 00:04:02.792 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.792 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.792 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:02.792 14:59:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
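The scan above is setup/common.sh:get_meminfo walking /proc/meminfo field by field until it reaches the key it was asked for (HugePages_Surp here, which comes back as 0 and is stored in surp). A minimal stand-alone sketch of that lookup pattern, reconstructed from the trace rather than copied from the SPDK tree (the helper name meminfo_field and its exact argument handling are illustrative only):

meminfo_field() {
    # Usage: meminfo_field <field> [numa-node]
    #   meminfo_field HugePages_Surp      -> system-wide value from /proc/meminfo
    #   meminfo_field HugePages_Total 0   -> value for NUMA node 0 from sysfs
    local get=$1 node=$2
    local mem_f=/proc/meminfo var val rest
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS=': ' read -r var val rest; do
        if [[ $var == Node ]]; then
            # per-node meminfo lines carry a "Node <n> " prefix; drop it
            IFS=': ' read -r var val rest <<< "$rest"
        fi
        if [[ $var == "$get" ]]; then
            echo "$val"   # numeric value only; the trailing unit (kB) is discarded
            return 0
        fi
    done < "$mem_f"
    return 1
}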
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:02.792 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:02.792 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:02.792 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:02.792 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.792 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.792 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.792 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.792 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.792 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.792 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.793 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7948920 kB' 'MemAvailable: 9532476 kB' 'Buffers: 2436 kB' 'Cached: 1797084 kB' 'SwapCached: 0 kB' 'Active: 462640 kB' 'Inactive: 1457064 kB' 'Active(anon): 130656 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1457064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 121828 kB' 'Mapped: 48572 kB' 'Shmem: 10472 kB' 'KReclaimable: 62896 kB' 'Slab: 137924 kB' 'SReclaimable: 62896 kB' 'SUnreclaim: 75028 kB' 'KernelStack: 6384 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 340936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55000 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:04:02.793 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.793 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.793 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.793 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.793 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.793 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.793 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.793 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.793 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.793 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.793 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.793 14:59:40 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue [xtrace of the per-field meminfo scan: every field from Buffers through CmaFree is compared against HugePages_Rsvd and skipped with continue] 00:04:02.794 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.794
14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.794 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.794 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.794 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.794 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.794 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.794 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.794 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.794 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.794 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.794 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.794 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.794 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.794 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:02.794 nr_hugepages=1025 00:04:02.794 resv_hugepages=0 00:04:02.794 surplus_hugepages=0 00:04:02.794 anon_hugepages=0 00:04:02.794 14:59:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:02.794 14:59:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:02.794 14:59:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:02.794 14:59:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:02.794 14:59:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:02.794 14:59:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:02.794 14:59:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:02.794 14:59:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:02.794 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:02.794 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:02.794 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:02.794 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.794 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.794 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.794 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.794 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.794 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.795 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7948920 kB' 'MemAvailable: 9532476 kB' 'Buffers: 2436 kB' 'Cached: 1797084 kB' 'SwapCached: 0 kB' 'Active: 462272 kB' 'Inactive: 1457064 kB' 'Active(anon): 130288 kB' 'Inactive(anon): 0 kB' 
'Active(file): 331984 kB' 'Inactive(file): 1457064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 121676 kB' 'Mapped: 48572 kB' 'Shmem: 10472 kB' 'KReclaimable: 62896 kB' 'Slab: 137904 kB' 'SReclaimable: 62896 kB' 'SUnreclaim: 75008 kB' 'KernelStack: 6368 kB' 'PageTables: 4072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 340936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55000 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:04:02.795 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.795 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.795 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.795 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.795 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.795 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.795 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.795 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.795 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.795 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.795 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.795 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.795 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.795 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.795 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.795 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.795 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.795 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.795 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.795 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.795 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.795 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.795 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.795 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.795 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.795 14:59:40 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ [xtrace of the per-field meminfo scan: every field from Active through AnonHugePages is compared against HugePages_Total and skipped with continue] 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- #
continue 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:02.796 
14:59:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7948920 kB' 'MemUsed: 4293060 kB' 'SwapCached: 0 kB' 'Active: 462268 kB' 'Inactive: 1457064 kB' 'Active(anon): 130284 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1457064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1799520 kB' 'Mapped: 48572 kB' 'AnonPages: 121676 kB' 'Shmem: 10472 kB' 'KernelStack: 6368 kB' 'PageTables: 4072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62896 kB' 'Slab: 137900 kB' 'SReclaimable: 62896 kB' 'SUnreclaim: 75004 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.796 14:59:40 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.796 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
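[editor's note] The xtrace above is stepping entry-by-entry through the meminfo lookup in setup/common.sh: pick /proc/meminfo or the per-node /sys/devices/system/node/nodeN/meminfo, strip the "Node N " prefix, split each line on ': ', and print the value of the requested field (HugePages_Surp for node 0 here). The sketch below is reconstructed from those traced commands only; the variable names (get, node, mem_f, mem) follow the trace, but this is not the verbatim helper.

```bash
#!/usr/bin/env bash
shopt -s extglob   # the "Node +([0-9]) " prefix strip below uses an extglob pattern

# Hedged reconstruction of the traced get_meminfo lookup, not the original helper.
get_meminfo() {
    local get=$1 node=${2-}
    local mem_f=/proc/meminfo var val _
    local -a mem

    # Prefer the per-node view when a node was requested and sysfs exposes one.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem <"$mem_f"
    # Per-node lines read "Node 0 HugePages_Total:  1025"; dropping the
    # "Node N " prefix lets both files parse identically.
    mem=("${mem[@]#Node +([0-9]) }")

    local line
    for line in "${mem[@]}"; do
        # Split "HugePages_Surp:      0" into var=HugePages_Surp, val=0.
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

# Example mirroring the traced call: surplus hugepages on NUMA node 0.
get_meminfo HugePages_Surp 0
```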
00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:02.797 node0=1025 expecting 1025 00:04:02.797 ************************************ 00:04:02.797 END TEST odd_alloc 00:04:02.797 ************************************ 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 
expecting 1025' 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:02.797 00:04:02.797 real 0m0.941s 00:04:02.797 user 0m0.407s 00:04:02.797 sys 0m0.568s 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:02.797 14:59:40 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:02.797 14:59:40 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:02.797 14:59:40 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:02.797 14:59:40 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:02.797 14:59:40 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:02.797 14:59:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:02.797 ************************************ 00:04:02.797 START TEST custom_alloc 00:04:02.797 ************************************ 00:04:02.797 14:59:40 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:02.797 14:59:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:02.797 14:59:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:02.797 14:59:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:02.797 14:59:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:02.798 14:59:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:02.798 14:59:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:02.798 14:59:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:02.798 14:59:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:02.798 14:59:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:02.798 14:59:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:02.798 14:59:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:02.798 14:59:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:02.798 14:59:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:02.798 14:59:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:02.798 14:59:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:02.798 14:59:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:02.798 14:59:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:02.798 14:59:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:02.798 14:59:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:02.798 14:59:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:02.798 14:59:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:02.798 14:59:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:02.798 14:59:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:02.798 14:59:40 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:02.798 14:59:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:02.798 14:59:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:02.798 14:59:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:02.798 14:59:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:02.798 14:59:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:02.798 14:59:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:02.798 14:59:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:02.798 14:59:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:02.798 14:59:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:02.798 14:59:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:02.798 14:59:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:02.798 14:59:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:02.798 14:59:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:02.798 14:59:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:02.798 14:59:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:02.798 14:59:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:02.798 14:59:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:02.798 14:59:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:02.798 14:59:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:02.798 14:59:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.798 14:59:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:03.366 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:03.628 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:03.628 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:03.628 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:03.628 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:03.628 14:59:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:03.628 14:59:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:03.628 14:59:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:03.628 14:59:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:03.628 14:59:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:03.628 14:59:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:03.628 14:59:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:03.628 14:59:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:03.628 14:59:41 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:03.628 14:59:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:03.628 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:03.628 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:03.628 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:03.628 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.628 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.628 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.628 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.628 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.628 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.628 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.628 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8995160 kB' 'MemAvailable: 10578716 kB' 'Buffers: 2436 kB' 'Cached: 1797084 kB' 'SwapCached: 0 kB' 'Active: 459420 kB' 'Inactive: 1457064 kB' 'Active(anon): 127436 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1457064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118544 kB' 'Mapped: 48140 kB' 'Shmem: 10472 kB' 'KReclaimable: 62892 kB' 'Slab: 137800 kB' 'SReclaimable: 62892 kB' 'SUnreclaim: 74908 kB' 'KernelStack: 6336 kB' 'PageTables: 3892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 327156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54968 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.629 14:59:41 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.629 
14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
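[editor's note] The custom_alloc test that starts above calls get_test_nr_hugepages 1048576 and ends up with nr_hugepages=512, nodes_hp[0]=512 and HUGENODE='nodes_hp[0]=512'. The division step itself is not shown in the trace, but the numbers are consistent with a 1 GiB request (apparently in kB) at the 2048 kB hugepage size reported in the meminfo dumps. A minimal sketch of that arithmetic, assuming those units:

```bash
# Hedged sketch of the sizing behind "nr_hugepages=512" in the trace above.
size_kb=1048576                       # argument to get_test_nr_hugepages (1 GiB, assumed kB)
hugepagesize_kb=2048                  # "Hugepagesize: 2048 kB" from the meminfo dumps
nr_hugepages=$((size_kb / hugepagesize_kb))
echo "nr_hugepages=$nr_hugepages"     # -> 512

# With a single NUMA node the whole budget lands on node 0, matching
# nodes_hp[0]=512 and HUGENODE='nodes_hp[0]=512' in the trace.
declare -a nodes_hp=([0]=$nr_hugepages)
echo "HUGENODE='nodes_hp[0]=${nodes_hp[0]}'"
```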
00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.629 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
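[editor's note] The scans above feed the same consistency check the odd_alloc run just finished with: the global HugePages_Total must equal the requested count plus surplus and reserved pages ("(( 1025 == nr_hugepages + surp + resv ))"), and each node's total is then compared with its expected share ("node0=1025 expecting 1025"; 512 on node 0 for this custom_alloc run). The sketch below is a self-contained illustration of that check; the meminfo_val helper and the overall structure are assumptions, not the test's verify_nr_hugepages.

```bash
#!/usr/bin/env bash
# Hedged sketch of the hugepage verification shape seen in the trace; the
# helper name meminfo_val is hypothetical.
meminfo_val() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

expected=512                                   # nr_hugepages requested by this test
surp=$(meminfo_val HugePages_Surp)
resv=$(meminfo_val HugePages_Rsvd)
total=$(meminfo_val HugePages_Total)

# Global check, same shape as "(( 1025 == nr_hugepages + surp + resv ))" above.
if (( total == expected + surp + resv )); then
    echo "global hugepage count OK ($total)"
fi

# Per-node check: "Node 0 HugePages_Total:  512" lives in the node's meminfo,
# so the field of interest is the 4th column there.
for n in /sys/devices/system/node/node[0-9]*; do
    n=${n##*node}
    node_total=$(awk '$3 == "HugePages_Total:" {print $4}' \
        "/sys/devices/system/node/node$n/meminfo")
    echo "node$n=$node_total expecting $expected"
done
```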
00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8995420 kB' 'MemAvailable: 10578976 kB' 'Buffers: 2436 kB' 'Cached: 1797084 kB' 'SwapCached: 0 kB' 'Active: 459160 kB' 'Inactive: 1457064 kB' 'Active(anon): 127176 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1457064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118600 kB' 'Mapped: 48024 kB' 'Shmem: 10472 kB' 'KReclaimable: 62892 kB' 'Slab: 137796 kB' 'SReclaimable: 62892 kB' 'SUnreclaim: 74904 kB' 'KernelStack: 6320 kB' 
'PageTables: 3836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 327156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54952 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.630 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.631 14:59:41 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.631 
14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.631 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
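The span above is setup/common.sh's get_meminfo walking every field of /proc/meminfo and hitting "continue" for anything that is not the requested key (HugePages_Surp here), until the matching field's value is echoed back. A minimal sketch of that scan, assuming only the behaviour visible in the trace; the name get_meminfo_sketch and its argument handling are illustrative, not the verbatim SPDK helper:

get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo line var val _
    # Per-node statistics come from sysfs when a node number is supplied.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        # Per-node files prefix each line with "Node <N> "; drop that prefix,
        # as the trace's "${mem[@]#Node +([0-9]) }" step does.
        if [[ $line == "Node "* ]]; then
            line=${line#Node }
            line=${line#* }
        fi
        # Split "Field: value kB" into var / val, like the trace's read with IFS=': '.
        IFS=': ' read -r var val _ <<< "$line"
        # Every non-matching field is one of the "continue" entries in the log above.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

On the machine in this log, get_meminfo_sketch HugePages_Surp would print 0, and passing a node number would read that node's sysfs meminfo instead of the system-wide file.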
00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8995420 kB' 'MemAvailable: 10578972 kB' 'Buffers: 2436 kB' 'Cached: 1797080 kB' 'SwapCached: 0 kB' 'Active: 459116 kB' 'Inactive: 1457060 kB' 'Active(anon): 127132 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1457060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118596 kB' 'Mapped: 48024 kB' 'Shmem: 10472 kB' 'KReclaimable: 62892 kB' 'Slab: 137792 kB' 'SReclaimable: 62892 kB' 'SUnreclaim: 74900 kB' 'KernelStack: 6288 kB' 'PageTables: 3740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 327156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54920 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.632 14:59:41 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.632 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.633 
14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.633 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:03.634 nr_hugepages=512 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:03.634 resv_hugepages=0 00:04:03.634 surplus_hugepages=0 00:04:03.634 anon_hugepages=0 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:03.634 14:59:41 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8995420 kB' 'MemAvailable: 10578976 kB' 'Buffers: 2436 kB' 'Cached: 1797084 kB' 'SwapCached: 0 kB' 'Active: 459076 kB' 'Inactive: 1457064 kB' 'Active(anon): 127092 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1457064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118536 kB' 'Mapped: 48024 kB' 'Shmem: 10472 kB' 'KReclaimable: 62892 kB' 'Slab: 137788 kB' 'SReclaimable: 62892 kB' 'SUnreclaim: 74896 kB' 'KernelStack: 6304 kB' 'PageTables: 3788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 327156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54920 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.634 14:59:41 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.634 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.635 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.635 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.635 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.635 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.635 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.635 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.635 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.635 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.635 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.635 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.635 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.635 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.635 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.635 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.635 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.635 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.635 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.635 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:03.635 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.635 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.635 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.635 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.635 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.635 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.635 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.635 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.635 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.635 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.635 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.635 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.635 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.896 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.896 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.896 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.896 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.896 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.896 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.896 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.896 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.896 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.897 14:59:41 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
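By this point the trace has derived surp=0 (HugePages_Surp), resv=0 (HugePages_Rsvd), echoed nr_hugepages=512, and is re-reading HugePages_Total for the "(( 512 == nr_hugepages + surp + resv ))" checks at hugepages.sh@107 and @110. A sketch of that accounting, under the assumption that 512 is the page count the custom_alloc test requested; the variable name expected and the awk extraction are illustrative, not the SPDK code:

expected=512   # pages the custom_alloc test requested in this run (assumption)
nr_hugepages=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 512 in the trace
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)            # 0 in the trace
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)            # 0 in the trace
# One plausible reading of the trace's "(( 512 == nr_hugepages + surp + resv ))":
# the requested pool must be fully accounted for by allocated, surplus and
# reserved pages before the per-node breakdown is examined.
if (( expected == nr_hugepages + surp + resv )); then
    echo "hugepage pool OK: $nr_hugepages allocated, $surp surplus, $resv reserved"
else
    echo "unexpected hugepage accounting" >&2
fi

Both surplus and reserved are 0 in this run, so the check reduces to HugePages_Total matching the requested 512 pages.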
00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.897 14:59:41 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8995420 kB' 'MemUsed: 3246560 kB' 'SwapCached: 0 kB' 'Active: 459084 kB' 'Inactive: 1457064 kB' 'Active(anon): 127100 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1457064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1799520 kB' 'Mapped: 48024 kB' 'AnonPages: 118548 kB' 'Shmem: 10472 kB' 'KernelStack: 6304 kB' 'PageTables: 3788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62892 kB' 'Slab: 137788 kB' 'SReclaimable: 62892 kB' 'SUnreclaim: 74896 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.897 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.898 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.898 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.898 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.898 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.898 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.898 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.898 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.898 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.898 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.898 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.898 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.898 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.898 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.898 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.898 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.898 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.898 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:03.898 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.898 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.898 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.898 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
[... 00:04:03.898 setup/common.sh@31-32: IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue, repeated for the remaining node meminfo keys (Inactive(anon) through HugePages_Free), none matching ...]
00:04:03.899 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:03.899 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:03.899 14:59:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:03.899 14:59:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:03.899 14:59:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:03.899 14:59:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:03.899 14:59:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:03.899 node0=512 expecting 512
00:04:03.899 ************************************
00:04:03.899 END TEST custom_alloc
00:04:03.899 ************************************
00:04:03.899 14:59:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:03.899 14:59:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:03.899
00:04:03.899 real 0m0.935s
00:04:03.899 user 0m0.436s
00:04:03.899 sys 0m0.529s
00:04:03.899 14:59:41 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:03.899 14:59:41 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:03.899 14:59:41 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:03.899 14:59:41 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:03.899 14:59:41 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:03.899 14:59:41 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:03.899 14:59:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:03.899 ************************************
00:04:03.899 START TEST no_shrink_alloc
00:04:03.899 ************************************
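The block above closes custom_alloc: the meminfo helper finally reaches the HugePages_Surp entry, reports 0 surplus pages, and the test prints "node0=512 expecting 512" and passes in roughly 0.9 s of wall time. For reference, a minimal standalone check in the same spirit, reading the per-node sysfs counter directly rather than going through setup/common.sh; the path is the standard kernel sysfs location for 2048 kB hugepages and the variable names are illustrative, not taken from the SPDK scripts:

#!/usr/bin/env bash
# Illustrative re-check of the "node0=512 expecting 512" assertion above.
expected=512
node=0
nr_file=/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages
if [[ -r $nr_file ]]; then
    actual=$(<"$nr_file")
    echo "node${node}=${actual} expecting ${expected}"
    [[ $actual -eq $expected ]]       # exit status mirrors the test's final [[ 512 == 512 ]] check
else
    echo "no hugepage sysfs entry for node ${node}" >&2
    exit 1
fi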
00:04:03.899 14:59:41 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
00:04:03.899 14:59:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:03.899 14:59:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:03.899 14:59:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:03.899 14:59:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:04:03.899 14:59:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:03.899 14:59:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:03.899 14:59:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:03.899 14:59:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:03.899 14:59:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:03.899 14:59:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:03.899 14:59:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:03.899 14:59:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:03.899 14:59:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:03.899 14:59:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:03.899 14:59:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:03.899 14:59:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:03.899 14:59:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:03.899 14:59:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:03.899 14:59:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:03.899 14:59:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:04:03.899 14:59:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:03.899 14:59:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:04.466 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:04.466 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:04.466 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:04.466 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:04.466 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:04.729 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:04.729 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:04.729 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:04.729 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:04.729 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:04.729 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
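Earlier in this block, no_shrink_alloc asked for 2097152 kB of hugepages on node 0, which the trace shows being turned into nr_hugepages=1024 before scripts/setup.sh re-checks the test devices (devices with mounted filesystems are left alone, the rest already sit on uio_pci_generic). A rough sketch of that size-to-pages arithmetic, with my own variable names; the real logic is get_test_nr_hugepages / get_test_nr_hugepages_per_node in setup/hugepages.sh:

#!/usr/bin/env bash
# Rough sketch: 2097152 kB requested / 2048 kB default hugepage size = 1024 pages,
# all placed on the single node the caller asked for.
size_kb=2097152
hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
: "${hugepage_kb:=2048}"                       # fall back to 2 MiB if the read failed
nr_hugepages=$(( size_kb / hugepage_kb ))
declare -A nodes_test
for node in 0; do                              # user_nodes=('0') in the trace
    nodes_test[$node]=$nr_hugepages
done
echo "nr_hugepages=${nr_hugepages} -> node(s) ${!nodes_test[*]}"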
00:04:04.729 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:04.729 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:04.729 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:04.729 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:04.730 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:04.730 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:04.730 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:04.730 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.730 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:04.730 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:04.730 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.730 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.730 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:04.730 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:04.730 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7946200 kB' 'MemAvailable: 9529760 kB' 'Buffers: 2436 kB' 'Cached: 1797088 kB' 'SwapCached: 0 kB' 'Active: 459312 kB' 'Inactive: 1457068 kB' 'Active(anon): 127328 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1457068 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 118476 kB' 'Mapped: 47836 kB' 'Shmem: 10472 kB' 'KReclaimable: 62892 kB' 'Slab: 137744 kB' 'SReclaimable: 62892 kB' 'SUnreclaim: 74852 kB' 'KernelStack: 6320 kB' 'PageTables: 3792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 327284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54952 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB'
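The printf above is the whole of /proc/meminfo as get_meminfo captured it; everything that follows in the trace is that helper walking the capture entry by entry until it reaches AnonHugePages. A simplified stand-in for the lookup, using the same mapfile / IFS=': ' read / compare-and-continue pattern the trace shows; this is not the SPDK implementation, just a sketch of it:

#!/usr/bin/env bash
# Simplified stand-in for setup/common.sh's get_meminfo: slurp a meminfo file,
# strip the optional "Node N " prefix that per-node files carry, then compare
# keys until the requested one is found and print its value.
shopt -s extglob
get_meminfo_value() {
    local get=$1 node=${2-} mem_f=/proc/meminfo var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")           # same prefix strip as setup/common.sh@29
    local entry
    for entry in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$entry"
        [[ $var == "$get" ]] || continue       # the compare/continue pairs filling the trace
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo_value AnonHugePages                # prints 0 for the snapshot above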
[... 00:04:04.730-731 setup/common.sh@31-32: IFS=': ' / read -r var val _ / [[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue, repeated for every key of the snapshot above (MemTotal through HardwareCorrupted), none matching ...]
00:04:04.731 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:04.731 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:04.731 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:04.731 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
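anon=0 here matches the AnonHugePages: 0 kB entry in the snapshot above; the hugepages.sh@96 test earlier ([[ always [madvise] never != *\[\n\e\v\e\r\]* ]]) suggests the value is only consulted while transparent hugepages are not set to [never]. A small illustrative check along those lines; the paths are standard kernel procfs/sysfs, but the control flow is my reading of the trace rather than a copy of the script:

#!/usr/bin/env bash
# Illustrative version of the hugepages.sh@96-97 step: only count AnonHugePages
# (anonymous memory backed by transparent hugepages) when THP is not disabled.
anon=0
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)    # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
fi
echo "anon=${anon}"                                     # 0 on the VM traced above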
00:04:04.731 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[... setup/common.sh@17-31: same preamble as the AnonHugePages lookup above (local get=HugePages_Surp, mem_f=/proc/meminfo, mapfile -t mem, strip "Node N " prefixes) ...]
00:04:04.732 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7946200 kB' 'MemAvailable: 9529760 kB' 'Buffers: 2436 kB' 'Cached: 1797088 kB' 'SwapCached: 0 kB' 'Active: 459136 kB' 'Inactive: 1457068 kB' 'Active(anon): 127152 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1457068 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 118592 kB' 'Mapped: 47836 kB' 'Shmem: 10472 kB' 'KReclaimable: 62892 kB' 'Slab: 137744 kB' 'SReclaimable: 62892 kB' 'SUnreclaim: 74852 kB' 'KernelStack: 6320 kB' 'PageTables: 3784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 327284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54936 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB'
[... 00:04:04.732-733 setup/common.sh@31-32: read / compare / continue repeated for every key of the snapshot above (MemTotal through HugePages_Rsvd), none matching ...]
00:04:04.733 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:04.733 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:04.733 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:04.733 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:04.733 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[... setup/common.sh@17-31: same preamble again (local get=HugePages_Rsvd, mem_f=/proc/meminfo, mapfile -t mem, strip "Node N " prefixes) ...]
00:04:04.733 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7946812 kB' 'MemAvailable: 9530372 kB' 'Buffers: 2436 kB' 'Cached: 1797088 kB' 'SwapCached: 0 kB' 'Active: 459436 kB' 'Inactive: 1457068 kB' 'Active(anon): 127452 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1457068 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 118700 kB' 'Mapped: 48876 kB' 'Shmem: 10472 kB' 'KReclaimable: 62892 kB' 'Slab: 137744 kB' 'SReclaimable: 62892 kB' 'SUnreclaim: 74852 kB' 'KernelStack: 6352 kB' 'PageTables: 3884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 329848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54888 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB'
[... 00:04:04.734 setup/common.sh@31-32: scan of this snapshot for HugePages_Rsvd in progress (MemTotal through SwapTotal compared so far, none matching); trace continues ...]
14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.734 14:59:42 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.734 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.735 14:59:42 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:04.735 nr_hugepages=1024 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:04.735 resv_hugepages=0 00:04:04.735 surplus_hugepages=0 00:04:04.735 anon_hugepages=0 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7946812 kB' 'MemAvailable: 9530372 kB' 'Buffers: 2436 kB' 'Cached: 1797088 kB' 'SwapCached: 0 kB' 'Active: 459616 kB' 'Inactive: 1457068 kB' 'Active(anon): 127632 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1457068 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 
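
The trace above is setup/common.sh's get_meminfo scanning /proc/meminfo one "key: value" pair at a time and echoing the value once the requested key matches. A minimal sketch of that style of lookup, with illustrative names (the real helper lives in test/setup/common.sh and uses mapfile plus an IFS=': ' read loop, as the xtrace shows):

    # Sketch only: return the value of one meminfo key, system-wide or per NUMA node.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # When a node is given and a per-node meminfo exists, read that file instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val _
        while IFS= read -r line; do
            line=${line#Node "$node" }          # strip the "Node N " prefix on per-node files
            IFS=': ' read -r var val _ <<<"$line"
            if [[ $var == "$get" ]]; then
                echo "$val"                     # e.g. 1024 for HugePages_Total, 0 for HugePages_Rsvd
                return 0
            fi
        done <"$mem_f"
        return 1
    }

    # Usage matching the values in this run:
    #   get_meminfo_sketch HugePages_Rsvd      -> 0
    #   get_meminfo_sketch HugePages_Surp 0    -> 0 (read from node0's meminfo)
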
00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.735 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7946812 kB' 'MemAvailable: 9530372 kB' 'Buffers: 2436 kB' 'Cached: 1797088 kB' 'SwapCached: 0 kB' 'Active: 459616 kB' 'Inactive: 1457068 kB' 'Active(anon): 127632 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1457068 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 118904 kB' 'Mapped: 47836 kB' 'Shmem: 10472 kB' 'KReclaimable: 62892 kB' 'Slab: 137744 kB' 'SReclaimable: 62892 kB' 'SUnreclaim: 74852 kB' 'KernelStack: 6336 kB' 'PageTables: 3836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 327284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54872 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB'
00:04:04.736 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # IFS=': '; read -r var val _; continue   (one iteration per field above; every non-matching field is skipped until HugePages_Total is reached)
00:04:04.737 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:04.737 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:04.737 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
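
The check above is plain bookkeeping over the numbers just read: the test requested nr_hugepages=1024, and /proc/meminfo reports HugePages_Total: 1024 with no surplus or reserved pages, so 1024 == 1024 + 0 + 0 holds. A small sketch of that arithmetic with the values from this run (variable names illustrative):

    nr_hugepages=1024   # requested by the test
    surp=0              # HugePages_Surp from /proc/meminfo
    resv=0              # HugePages_Rsvd from /proc/meminfo
    total=1024          # HugePages_Total from /proc/meminfo
    (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"
    # Size cross-check: 1024 pages * 2048 kB (Hugepagesize) = 2097152 kB,
    # which matches the 'Hugetlb: 2097152 kB' field in the snapshots above.
    echo "$((1024 * 2048)) kB"
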
00:04:04.737 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:04.737 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:04.737 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:04.737 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:04.737 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:04.737 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:04.737 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:04.737 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:04.737 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:04.737 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:04.737 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:04.737 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:04.737 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:04.737 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.737 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:04.737 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:04.737 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.737 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.737 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7946560 kB' 'MemUsed: 4295420 kB' 'SwapCached: 0 kB' 'Active: 459340 kB' 'Inactive: 1457068 kB' 'Active(anon): 127356 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1457068 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 1799524 kB' 'Mapped: 47836 kB' 'AnonPages: 118480 kB' 'Shmem: 10472 kB' 'KernelStack: 6288 kB' 'PageTables: 3692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62892 kB' 'Slab: 137744 kB' 'SReclaimable: 62892 kB' 'SUnreclaim: 74852 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:04.738 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # IFS=': '; read -r var val _; continue   (one iteration per node0 field above; every non-matching field is skipped until HugePages_Surp is reached)
00:04:04.738 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:04.738 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.738 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.738 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.738 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.738 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.738 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.738 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.738 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.738 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.738 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.738 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.738 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.738 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.738 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:04.738 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:04.738 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:04.738 node0=1024 expecting 1024 00:04:04.738 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:04.738 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:04.738 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:04.738 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:04.738 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:04.738 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:04.738 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:04.738 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.738 14:59:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:05.305 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:05.568 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:05.568 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:05.568 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:05.568 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:05.568 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:05.568 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:05.568 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:05.568 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 
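Note for readers skimming this trace: the long runs of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue" entries above are setup/common.sh's get_meminfo walking /proc/meminfo one "Key: value" pair at a time until it hits the requested counter, then echoing that value. A minimal sketch of that pattern is shown below; get_meminfo_sketch and its variables are illustrative names, not the actual SPDK helper.

    # Minimal sketch (not the real setup/common.sh): scan /proc/meminfo the
    # same way the xtrace above does -- read each "Key: value" pair, skip
    # non-matching keys with `continue`, and print the value for the
    # requested key (e.g. HugePages_Surp).
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < /proc/meminfo
    }

    # e.g. the scan that ends with "echo 0" above amounts to:
    #   surp=$(get_meminfo_sketch HugePages_Surp)   # -> 0 in this run

On this VM the per-node path appears to be skipped (the node number is empty in the "-e /sys/devices/system/node/node/meminfo" test above), so the scan falls back to the system-wide /proc/meminfo.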
00:04:05.568 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:05.568 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:05.568 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:05.568 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:05.568 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:05.568 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:05.568 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:05.568 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:05.568 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.568 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.568 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.568 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.568 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.568 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.568 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.568 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.568 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.568 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7941900 kB' 'MemAvailable: 9525460 kB' 'Buffers: 2436 kB' 'Cached: 1797088 kB' 'SwapCached: 0 kB' 'Active: 459520 kB' 'Inactive: 1457068 kB' 'Active(anon): 127536 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1457068 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 118636 kB' 'Mapped: 47968 kB' 'Shmem: 10472 kB' 'KReclaimable: 62892 kB' 'Slab: 137688 kB' 'SReclaimable: 62892 kB' 'SUnreclaim: 74796 kB' 'KernelStack: 6280 kB' 'PageTables: 3748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 327284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54952 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.569 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.570 14:59:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7942160 kB' 'MemAvailable: 9525720 kB' 'Buffers: 2436 kB' 'Cached: 1797088 kB' 'SwapCached: 0 kB' 'Active: 459192 kB' 'Inactive: 1457068 kB' 'Active(anon): 127208 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1457068 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 118416 kB' 'Mapped: 48224 kB' 'Shmem: 10472 kB' 'KReclaimable: 62892 kB' 'Slab: 137688 kB' 'SReclaimable: 62892 kB' 'SUnreclaim: 74796 kB' 'KernelStack: 6296 kB' 'PageTables: 3796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 326916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54936 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.570 
14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.570 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
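The same full scan is repeated once per counter: verify_nr_hugepages reads AnonHugePages (anon=0), then HugePages_Surp, then HugePages_Rsvd, each time walking the /proc/meminfo snapshot printed above (HugePages_Total: 1024, HugePages_Free: 1024). A rough, self-contained sketch of the comparison this is building up to is below; the helper names are hypothetical and the logic is simplified to the single-node case this VM presents.

    # hp: pull one numeric counter out of /proc/meminfo (illustrative helper).
    hp() { awk -v k="$1" -F': +' '$1 == k { print $2 + 0 }' /proc/meminfo; }

    # Compare the allocated hugepage count against the expected value, i.e.
    # the check the trace echoes as "node0=1024 expecting 1024" and then
    # tests with [[ 1024 == 1024 ]].
    check_hugepages_sketch() {
        local expected=$1
        local total surp anon
        total=$(hp HugePages_Total)    # 1024 in this run
        surp=$(hp HugePages_Surp)      # 0 in this run; gathered as in the trace,
        anon=$(hp AnonHugePages)       # 0 in this run; not used by this simplified check
        echo "node0=${total} expecting ${expected}"
        [[ $total -eq $expected ]]
    }

    check_hugepages_sketch 1024        # exits 0 when the counts match

In this log the requested 512 pages (NRHUGE=512) were already covered by the 1024 pages allocated earlier, which is what the "INFO: Requested 512 hugepages but 1024 already allocated on node0" line above reports.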
00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.571 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.572 14:59:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7942348 kB' 'MemAvailable: 9525908 kB' 'Buffers: 2436 kB' 'Cached: 1797088 kB' 'SwapCached: 0 kB' 'Active: 459064 kB' 'Inactive: 1457068 kB' 'Active(anon): 127080 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1457068 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 118504 kB' 'Mapped: 47836 kB' 'Shmem: 10472 kB' 'KReclaimable: 62892 kB' 'Slab: 137724 kB' 'SReclaimable: 62892 kB' 'SUnreclaim: 74832 kB' 'KernelStack: 6288 kB' 'PageTables: 3692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 327284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54936 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.572 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.573 14:59:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.573 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:05.574 nr_hugepages=1024 00:04:05.574 resv_hugepages=0 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:05.574 surplus_hugepages=0 00:04:05.574 anon_hugepages=0 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
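The stretch of trace above is setup/common.sh's get_meminfo helper resolving HugePages_Rsvd: it printf's the whole meminfo snapshot, then walks it with `IFS=': ' read -r var val _`, continuing past every field whose name is not the requested key and echoing the value (0 here) when it matches, which is how hugepages.sh arrives at resv=0 before echoing nr_hugepages=1024 and the other counters. The following is a minimal stand-alone sketch of that lookup pattern, not the exact SPDK helper; the function name is made up for illustration.

```bash
#!/usr/bin/env bash
# Illustrative meminfo lookup in the style traced above: scan a meminfo
# file with IFS=': ' and print the value of a single field.
get_meminfo_field() {
    local get=$1 node=${2-}            # field name, optional NUMA node
    local mem_f=/proc/meminfo var val _
    # Per-node queries read the node's own meminfo when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node files prefix every line with "Node N "; strip it so the
    # field name lands in $var exactly as it does for /proc/meminfo.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"
        return 0
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
    echo 0   # field absent: fall back to zero
}

get_meminfo_field HugePages_Rsvd      # prints 0 on the VM traced here
get_meminfo_field HugePages_Surp 0    # same lookup against node0's meminfo
```

The dump being scanned is also internally consistent: 1024 huge pages of 2048 kB each account for the Hugetlb: 2097152 kB line (1024 × 2048 = 2,097,152 kB).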
read -r var val _ 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7942348 kB' 'MemAvailable: 9525908 kB' 'Buffers: 2436 kB' 'Cached: 1797088 kB' 'SwapCached: 0 kB' 'Active: 459060 kB' 'Inactive: 1457068 kB' 'Active(anon): 127076 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1457068 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 118504 kB' 'Mapped: 47836 kB' 'Shmem: 10472 kB' 'KReclaimable: 62892 kB' 'Slab: 137724 kB' 'SReclaimable: 62892 kB' 'SUnreclaim: 74832 kB' 'KernelStack: 6288 kB' 'PageTables: 3692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 327284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54936 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.574 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.575 
14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.575 14:59:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
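The scan running through these entries is the same field-matching loop, now resolving HugePages_Total so that hugepages.sh can verify its bookkeeping: the kernel's reported total must equal the pages the test asked for plus surplus and reserved pages. With the numbers from this run that is a trivial identity; a minimal illustration follows, with the variable names following the check visible in the trace.

```bash
# Values taken from the meminfo dump earlier in this run.
nr_hugepages=1024   # pages the test requested
surp=0              # HugePages_Surp
resv=0              # HugePages_Rsvd
total=1024          # HugePages_Total reported by the kernel
(( total == nr_hugepages + surp + resv )) && echo 'hugepage accounting consistent'
```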
00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.575 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.576 14:59:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7942348 kB' 'MemUsed: 4299632 kB' 'SwapCached: 0 kB' 'Active: 459060 kB' 'Inactive: 1457068 kB' 'Active(anon): 127076 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1457068 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 1799524 kB' 'Mapped: 47836 kB' 'AnonPages: 118504 kB' 'Shmem: 10472 kB' 'KernelStack: 6288 kB' 'PageTables: 3692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 
kB' 'KReclaimable: 62892 kB' 'Slab: 137724 kB' 'SReclaimable: 62892 kB' 'SUnreclaim: 74832 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.576 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.577 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.577 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.577 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.577 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.577 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.577 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.577 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.577 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.577 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.577 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.577 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.577 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:05.577 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.577 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.577 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.577 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.577 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.577 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.577 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.577 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.577 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.577 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.577 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.577 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.577 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.577 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.577 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.577 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.577 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.577 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.577 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.577 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.577 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.577 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.577 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.577 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.577 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.577 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.577 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.577 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.577 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.577 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.577 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.836 14:59:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
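The lookup threading through this stretch is get_meminfo again, but pointed at a specific node: the caller passed node=0 and /sys/devices/system/node/node0/meminfo exists, so the helper reads the per-node file, strips the "Node 0 " prefix from each line, and hunts for HugePages_Surp. hugepages.sh folds the result into its per-node tally before printing the node0=1024 expecting 1024 verdict a little further down. A rough sketch of that per-node accounting follows; the array and variable names are made up for illustration, and the awk lookup stands in for the read loop in the trace.

```bash
#!/usr/bin/env bash
# Illustrative per-node hugepage tally in the spirit of the trace: each
# NUMA node is expected to hold the pages assigned to it plus whatever
# surplus pages the kernel reports for that node.
nodes_expected=()
for node_dir in /sys/devices/system/node/node[0-9]*; do
    [[ -d $node_dir ]] || continue     # skip if the glob did not match
    node=${node_dir##*node}
    nodes_expected[node]=1024          # pages assigned per node in this run
done

for node in "${!nodes_expected[@]}"; do
    # Per-node meminfo prefixes every line with "Node N ", so the surplus
    # count is the fourth field of its HugePages_Surp line.
    surp=$(awk '$3 == "HugePages_Surp:" {print $4}' \
           "/sys/devices/system/node/node$node/meminfo")
    (( nodes_expected[node] += ${surp:-0} ))
    echo "node${node}=${nodes_expected[node]} expecting 1024"
done
```

In this single-node run the surplus is 0, so the check reduces to node0=1024 expecting 1024, exactly what the log prints below before clear_hp writes 0 back into every node's hugepages-*/nr_hugepages and exports CLEAR_HUGE=yes to wind the hugepages suite down.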
00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:05.836 node0=1024 expecting 1024 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:05.836 00:04:05.836 real 0m1.851s 00:04:05.836 user 0m0.791s 00:04:05.836 sys 0m1.132s 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.836 14:59:43 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:05.836 ************************************ 00:04:05.836 END TEST no_shrink_alloc 00:04:05.836 ************************************ 00:04:05.836 14:59:43 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:05.836 14:59:43 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:05.836 14:59:43 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:05.836 14:59:43 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:05.836 14:59:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:05.836 14:59:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:05.836 14:59:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:05.836 14:59:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:05.836 14:59:43 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:05.836 14:59:43 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:05.836 00:04:05.836 real 0m7.785s 00:04:05.836 user 0m3.266s 00:04:05.836 sys 0m4.691s 00:04:05.836 14:59:43 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.836 14:59:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:05.836 ************************************ 00:04:05.836 END TEST hugepages 00:04:05.836 ************************************ 00:04:05.836 14:59:43 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:05.836 14:59:43 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:05.836 14:59:43 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.836 14:59:43 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.836 14:59:43 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:05.836 ************************************ 00:04:05.836 START TEST driver 00:04:05.836 ************************************ 00:04:05.836 14:59:43 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:05.836 * Looking for test storage... 00:04:05.836 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:05.836 14:59:43 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:05.836 14:59:43 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:05.836 14:59:43 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:12.427 14:59:50 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:12.427 14:59:50 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:12.427 14:59:50 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.427 14:59:50 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:12.427 ************************************ 00:04:12.427 START TEST guess_driver 00:04:12.427 ************************************ 00:04:12.427 14:59:50 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:12.427 14:59:50 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:12.427 14:59:50 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:12.427 14:59:50 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:12.427 14:59:50 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:12.427 14:59:50 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:12.427 14:59:50 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:12.427 14:59:50 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:12.427 14:59:50 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:12.427 14:59:50 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:12.427 14:59:50 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:12.427 14:59:50 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:04:12.427 
14:59:50 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:04:12.427 14:59:50 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:12.427 14:59:50 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:12.427 14:59:50 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:12.427 14:59:50 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:12.427 14:59:50 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:12.427 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:12.427 14:59:50 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:12.427 14:59:50 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:12.427 Looking for driver=uio_pci_generic 00:04:12.427 14:59:50 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:12.427 14:59:50 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:12.427 14:59:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.427 14:59:50 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:12.427 14:59:50 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.427 14:59:50 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:12.696 14:59:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:12.696 14:59:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:12.696 14:59:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.263 14:59:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.263 14:59:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:13.263 14:59:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.263 14:59:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.263 14:59:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:13.263 14:59:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.263 14:59:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.263 14:59:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:13.263 14:59:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.521 14:59:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.521 14:59:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:13.521 14:59:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.521 14:59:51 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:13.521 14:59:51 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:13.521 14:59:51 setup.sh.driver.guess_driver 
-- setup/common.sh@9 -- # [[ reset == output ]] 00:04:13.521 14:59:51 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:20.112 00:04:20.112 real 0m7.581s 00:04:20.112 user 0m0.851s 00:04:20.112 sys 0m1.863s 00:04:20.112 14:59:57 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:20.112 14:59:57 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:20.112 ************************************ 00:04:20.112 END TEST guess_driver 00:04:20.112 ************************************ 00:04:20.112 14:59:57 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:20.112 00:04:20.112 real 0m13.850s 00:04:20.112 user 0m1.312s 00:04:20.112 sys 0m2.846s 00:04:20.112 14:59:57 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:20.112 14:59:57 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:20.112 ************************************ 00:04:20.112 END TEST driver 00:04:20.112 ************************************ 00:04:20.112 14:59:57 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:20.112 14:59:57 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:20.112 14:59:57 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.112 14:59:57 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.112 14:59:57 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:20.112 ************************************ 00:04:20.112 START TEST devices 00:04:20.112 ************************************ 00:04:20.112 14:59:57 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:20.112 * Looking for test storage... 
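Editor's note: the guess_driver run that just finished first tries vfio, which it only accepts when /sys/kernel/iommu_groups is non-empty or unsafe no-IOMMU mode is enabled; in this VM neither holds (the trace shows (( 0 > 0 )) and [[ '' == Y ]] both failing), so it returns 1 and falls back to uio_pci_generic, accepting it because modprobe --show-depends resolves to real .ko modules. A condensed sketch of that decision, with the function layout simplified relative to test/setup/driver.sh:

  pick_driver() {
      local iommu_groups=() unsafe=""
      shopt -s nullglob
      iommu_groups=(/sys/kernel/iommu_groups/*)
      shopt -u nullglob
      [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
          unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
      if ((${#iommu_groups[@]} > 0)) || [[ $unsafe == Y ]]; then
          echo vfio-pci
      elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
          echo uio_pci_generic      # the path taken in this run
      else
          echo 'No valid driver found'
      fi
  }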
00:04:20.112 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:20.112 14:59:57 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:20.112 14:59:57 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:20.112 14:59:57 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:20.112 14:59:57 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:21.491 14:59:59 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:21.491 14:59:59 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:21.491 14:59:59 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:21.491 14:59:59 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:21.491 14:59:59 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:21.491 14:59:59 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:21.491 14:59:59 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:21.491 14:59:59 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:21.491 14:59:59 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:21.491 14:59:59 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:21.491 14:59:59 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:21.491 14:59:59 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:21.491 14:59:59 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:21.491 14:59:59 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:21.491 14:59:59 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:04:21.491 14:59:59 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:04:21.491 14:59:59 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:21.491 14:59:59 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:21.491 14:59:59 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:21.491 14:59:59 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:04:21.491 14:59:59 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:04:21.491 14:59:59 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:21.491 14:59:59 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:21.491 14:59:59 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:21.491 14:59:59 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:04:21.491 14:59:59 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:04:21.491 14:59:59 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:21.491 14:59:59 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:21.491 14:59:59 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:21.491 14:59:59 
setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:04:21.491 14:59:59 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:04:21.491 14:59:59 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:21.491 14:59:59 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:21.491 14:59:59 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:21.491 14:59:59 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:04:21.491 14:59:59 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:04:21.491 14:59:59 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:21.491 14:59:59 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:21.491 14:59:59 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:21.491 14:59:59 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:21.491 No valid GPT data, bailing 00:04:21.491 14:59:59 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:21.491 14:59:59 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:21.491 14:59:59 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:21.491 14:59:59 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:21.491 14:59:59 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:21.491 14:59:59 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:21.491 
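Editor's note: the device scan traced here keeps a namespace only if it is not zoned (queue/zoned reads "none"), carries no recognizable partition table, and is at least min_disk_size (3221225472 bytes, i.e. 3 GiB); the sec_size_to_bytes echo above reports 5368709120 bytes (5 GiB) for nvme0n1, so it qualifies. A simplified sketch of that filter; the real logic is split between test/setup/devices.sh and scripts/common.sh and uses spdk-gpt.py in addition to blkid:

  min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472 bytes, as in the trace

  usable_test_disk() {
      local blk=$1 zoned pt size
      zoned=$(cat "/sys/block/$blk/queue/zoned" 2>/dev/null || echo none)
      [[ $zoned == none ]] || return 1                        # skip zoned namespaces
      pt=$(blkid -s PTTYPE -o value "/dev/$blk" 2>/dev/null)  # any partition table means "in use"
      [[ -z $pt ]] || return 1
      size=$(( $(cat "/sys/block/$blk/size") * 512 ))         # sectors -> bytes
      (( size >= min_disk_size ))
  }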
14:59:59 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:21.491 14:59:59 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:21.491 14:59:59 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:21.491 No valid GPT data, bailing 00:04:21.491 14:59:59 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:21.491 14:59:59 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:21.491 14:59:59 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:21.491 14:59:59 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:21.491 14:59:59 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:21.491 14:59:59 setup.sh.devices -- setup/common.sh@80 -- # echo 6343335936 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:04:21.491 14:59:59 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:04:21.491 14:59:59 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:04:21.491 No valid GPT data, bailing 00:04:21.491 14:59:59 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:21.491 14:59:59 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:21.491 14:59:59 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:04:21.491 14:59:59 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n1 00:04:21.491 14:59:59 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:04:21.491 14:59:59 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n2 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n2 00:04:21.491 14:59:59 
setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n2 pt 00:04:21.491 14:59:59 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n2 00:04:21.491 No valid GPT data, bailing 00:04:21.491 14:59:59 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:21.491 14:59:59 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:21.491 14:59:59 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n2 00:04:21.491 14:59:59 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n2 00:04:21.491 14:59:59 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n2 ]] 00:04:21.491 14:59:59 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n3 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:04:21.491 14:59:59 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n3 00:04:21.491 14:59:59 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n3 pt 00:04:21.492 14:59:59 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n3 00:04:21.492 No valid GPT data, bailing 00:04:21.492 14:59:59 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:21.492 14:59:59 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:21.492 14:59:59 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:21.492 14:59:59 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n3 00:04:21.492 14:59:59 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n3 00:04:21.492 14:59:59 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n3 ]] 00:04:21.492 14:59:59 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:21.492 14:59:59 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:21.492 14:59:59 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:21.492 14:59:59 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:04:21.492 14:59:59 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:21.492 14:59:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:04:21.492 14:59:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3 00:04:21.492 14:59:59 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:13.0 00:04:21.492 14:59:59 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:04:21.492 14:59:59 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:04:21.492 14:59:59 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme3n1 pt 00:04:21.492 14:59:59 setup.sh.devices 
-- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:04:21.751 No valid GPT data, bailing 00:04:21.751 14:59:59 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:21.751 14:59:59 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:21.751 14:59:59 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:21.751 14:59:59 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:04:21.751 14:59:59 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme3n1 00:04:21.751 14:59:59 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:04:21.751 14:59:59 setup.sh.devices -- setup/common.sh@80 -- # echo 1073741824 00:04:21.751 14:59:59 setup.sh.devices -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:04:21.751 14:59:59 setup.sh.devices -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:04:21.751 14:59:59 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:21.752 14:59:59 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:21.752 14:59:59 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:21.752 14:59:59 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.752 14:59:59 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:21.752 ************************************ 00:04:21.752 START TEST nvme_mount 00:04:21.752 ************************************ 00:04:21.752 14:59:59 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:21.752 14:59:59 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:21.752 14:59:59 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:21.752 14:59:59 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:21.752 14:59:59 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:21.752 14:59:59 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:21.752 14:59:59 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:21.752 14:59:59 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:21.752 14:59:59 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:21.752 14:59:59 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:21.752 14:59:59 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:21.752 14:59:59 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:21.752 14:59:59 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:21.752 14:59:59 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:21.752 14:59:59 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:21.752 14:59:59 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:21.752 14:59:59 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:21.752 14:59:59 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:21.752 14:59:59 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:21.752 14:59:59 setup.sh.devices.nvme_mount -- setup/common.sh@53 
-- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:22.696 Creating new GPT entries in memory. 00:04:22.696 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:22.696 other utilities. 00:04:22.696 15:00:00 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:22.696 15:00:00 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:22.696 15:00:00 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:22.696 15:00:00 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:22.696 15:00:00 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:24.068 Creating new GPT entries in memory. 00:04:24.068 The operation has completed successfully. 00:04:24.068 15:00:01 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:24.068 15:00:01 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:24.068 15:00:01 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 59677 00:04:24.069 15:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:24.069 15:00:01 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:24.069 15:00:01 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:24.069 15:00:01 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:24.069 15:00:01 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:24.069 15:00:01 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:24.069 15:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:24.069 15:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:24.069 15:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:24.069 15:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:24.069 15:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:24.069 15:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:24.069 15:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:24.069 15:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:24.069 15:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:24.069 15:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.069 15:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:24.069 15:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:24.069 15:00:01 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- 
# [[ output == output ]] 00:04:24.069 15:00:01 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:24.069 15:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:24.069 15:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:24.069 15:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:24.069 15:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.069 15:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:24.069 15:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.325 15:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:24.325 15:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.325 15:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:24.325 15:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.325 15:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:24.325 15:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.582 15:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:24.582 15:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.840 15:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:24.840 15:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:24.840 15:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:24.840 15:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:24.840 15:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:24.840 15:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:24.840 15:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:24.840 15:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:24.840 15:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:24.840 15:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:25.099 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:25.099 15:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:25.099 15:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:25.356 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:25.356 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 
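Editor's note: the nvme_mount flow above reduces to: zap the disk, create one 128 MiB partition (sectors 2048-264191), put ext4 on it, mount it under test/setup/nvme_mount, drop a dummy file, and then confirm that scripts/setup.sh config refuses to bind the controller because the namespace is mounted ("Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev"). A condensed replay of those steps, with error handling and the udev synchronization via sync_dev_uevents.sh omitted:

  disk=/dev/nvme0n1
  part=${disk}p1
  mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount

  sgdisk "$disk" --zap-all                         # wipe any existing GPT/MBR
  sgdisk "$disk" --new=1:2048:264191               # one 128 MiB test partition
  mkfs.ext4 -qF "$part"                            # fresh filesystem, no prompts
  mkdir -p "$mnt" && mount "$part" "$mnt"
  : > "$mnt/test_nvme"                             # the dummy file verify() looks for

  # setup.sh must now report the device as active instead of binding it:
  PCI_ALLOWED=0000:00:11.0 /home/vagrant/spdk_repo/spdk/scripts/setup.sh config

  umount "$mnt"
  wipefs --all "$part" "$disk"                     # cleanup, as in cleanup_nvme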
00:04:25.356 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:25.356 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:25.356 15:00:03 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:25.356 15:00:03 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:25.356 15:00:03 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:25.356 15:00:03 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:25.356 15:00:03 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:25.356 15:00:03 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:25.356 15:00:03 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:25.356 15:00:03 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:25.356 15:00:03 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:25.356 15:00:03 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:25.356 15:00:03 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:25.356 15:00:03 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:25.356 15:00:03 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:25.356 15:00:03 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:25.356 15:00:03 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:25.356 15:00:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.356 15:00:03 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:25.356 15:00:03 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:25.356 15:00:03 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.357 15:00:03 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:25.648 15:00:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:25.648 15:00:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:25.648 15:00:03 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:25.648 15:00:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.648 15:00:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:25.648 15:00:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.905 15:00:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:25.905 
15:00:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.905 15:00:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:25.905 15:00:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.905 15:00:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:25.905 15:00:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.162 15:00:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:26.162 15:00:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.420 15:00:04 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:26.420 15:00:04 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:26.420 15:00:04 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.420 15:00:04 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:26.420 15:00:04 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:26.420 15:00:04 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.420 15:00:04 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:04:26.420 15:00:04 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:26.420 15:00:04 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:26.420 15:00:04 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:26.421 15:00:04 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:26.421 15:00:04 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:26.421 15:00:04 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:26.421 15:00:04 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:26.421 15:00:04 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:26.421 15:00:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.421 15:00:04 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:26.421 15:00:04 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.421 15:00:04 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:26.987 15:00:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:26.987 15:00:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:26.987 15:00:04 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:26.987 15:00:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.987 15:00:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:26.987 15:00:04 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.987 15:00:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:26.987 15:00:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.246 15:00:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:27.246 15:00:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.246 15:00:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:27.246 15:00:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.549 15:00:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:27.549 15:00:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.549 15:00:05 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:27.549 15:00:05 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:27.549 15:00:05 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:27.549 15:00:05 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:27.549 15:00:05 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:27.549 15:00:05 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:27.549 15:00:05 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:27.549 15:00:05 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:27.549 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:27.808 00:04:27.808 real 0m5.995s 00:04:27.808 user 0m1.497s 00:04:27.808 sys 0m2.134s 00:04:27.808 15:00:05 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:27.808 15:00:05 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:27.808 ************************************ 00:04:27.808 END TEST nvme_mount 00:04:27.808 ************************************ 00:04:27.808 15:00:05 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:27.808 15:00:05 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:27.808 15:00:05 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:27.808 15:00:05 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.808 15:00:05 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:27.808 ************************************ 00:04:27.808 START TEST dm_mount 00:04:27.808 ************************************ 00:04:27.808 15:00:05 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:27.808 15:00:05 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:27.808 15:00:05 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:27.808 15:00:05 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:27.808 15:00:05 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:27.808 15:00:05 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:27.808 15:00:05 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:27.808 
15:00:05 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:27.808 15:00:05 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:27.808 15:00:05 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:27.808 15:00:05 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:27.808 15:00:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:27.808 15:00:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:27.808 15:00:05 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:27.808 15:00:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:27.808 15:00:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:27.808 15:00:05 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:27.808 15:00:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:27.808 15:00:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:27.808 15:00:05 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:27.808 15:00:05 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:27.808 15:00:05 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:28.741 Creating new GPT entries in memory. 00:04:28.741 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:28.741 other utilities. 00:04:28.741 15:00:06 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:28.741 15:00:06 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:28.741 15:00:06 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:28.741 15:00:06 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:28.741 15:00:06 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:30.119 Creating new GPT entries in memory. 00:04:30.119 The operation has completed successfully. 00:04:30.119 15:00:08 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:30.119 15:00:08 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:30.120 15:00:08 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:30.120 15:00:08 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:30.120 15:00:08 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:31.055 The operation has completed successfully. 
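Editor's note: from here the dm_mount test layers a device-mapper target over the two freshly created 128 MiB partitions, waits for /dev/mapper/nvme_dm_test to appear, resolves it to dm-0, and checks that both partitions list dm-0 under their holders/ directory. The trace does not show the table dm_mount feeds to dmsetup, so the sketch below assumes a simple linear concatenation purely for illustration:

  p1=/dev/nvme0n1p1
  p2=/dev/nvme0n1p2
  s1=$(blockdev --getsz "$p1")     # sizes in 512-byte sectors
  s2=$(blockdev --getsz "$p2")

  # Hypothetical table: concatenate the two partitions into one linear device.
  printf '%s\n' \
      "0 $s1 linear $p1 0" \
      "$s1 $s2 linear $p2 0" | dmsetup create nvme_dm_test

  readlink -f /dev/mapper/nvme_dm_test             # -> /dev/dm-0, as in the trace
  ls /sys/class/block/nvme0n1p1/holders            # dm-0 shows up as a holder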
00:04:31.055 15:00:09 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:31.055 15:00:09 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:31.055 15:00:09 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 60313 00:04:31.055 15:00:09 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:31.055 15:00:09 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:31.055 15:00:09 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:31.055 15:00:09 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:31.055 15:00:09 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:31.055 15:00:09 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:31.055 15:00:09 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:31.055 15:00:09 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:31.055 15:00:09 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:31.055 15:00:09 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:31.055 15:00:09 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:31.055 15:00:09 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:31.055 15:00:09 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:31.055 15:00:09 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:31.056 15:00:09 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:31.056 15:00:09 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:31.056 15:00:09 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:31.056 15:00:09 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:31.314 15:00:09 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:31.314 15:00:09 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:31.314 15:00:09 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:31.314 15:00:09 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:31.314 15:00:09 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:31.314 15:00:09 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:31.314 15:00:09 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:31.314 15:00:09 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:31.314 15:00:09 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:04:31.314 15:00:09 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:31.314 15:00:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.314 15:00:09 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:31.314 15:00:09 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:31.314 15:00:09 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.314 15:00:09 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:31.573 15:00:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:31.573 15:00:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:31.573 15:00:09 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:31.573 15:00:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.573 15:00:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:31.573 15:00:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.833 15:00:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:31.833 15:00:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.833 15:00:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:31.833 15:00:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.833 15:00:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:31.833 15:00:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.401 15:00:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:32.401 15:00:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.401 15:00:10 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:32.401 15:00:10 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:32.401 15:00:10 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:32.401 15:00:10 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:32.401 15:00:10 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:32.401 15:00:10 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:32.401 15:00:10 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:32.401 15:00:10 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:32.401 15:00:10 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:32.401 15:00:10 setup.sh.devices.dm_mount -- 
setup/devices.sh@50 -- # local mount_point= 00:04:32.401 15:00:10 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:32.401 15:00:10 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:32.401 15:00:10 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:32.401 15:00:10 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:32.401 15:00:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.401 15:00:10 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:32.401 15:00:10 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:32.401 15:00:10 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.401 15:00:10 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:32.969 15:00:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:32.969 15:00:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:32.969 15:00:10 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:32.969 15:00:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.969 15:00:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:32.969 15:00:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.969 15:00:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:32.969 15:00:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.260 15:00:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:33.260 15:00:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.260 15:00:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:33.260 15:00:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.527 15:00:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:33.527 15:00:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.786 15:00:11 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:33.786 15:00:11 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:33.786 15:00:11 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:33.786 15:00:11 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:33.786 15:00:11 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:33.786 15:00:11 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:33.786 15:00:11 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:33.786 15:00:11 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:33.786 15:00:11 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 
00:04:33.786 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:33.786 15:00:11 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:33.786 15:00:11 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:33.786 00:04:33.786 real 0m6.004s 00:04:33.786 user 0m1.066s 00:04:33.786 sys 0m1.418s 00:04:33.786 15:00:11 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.786 15:00:11 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:33.786 ************************************ 00:04:33.786 END TEST dm_mount 00:04:33.786 ************************************ 00:04:33.786 15:00:11 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:33.786 15:00:11 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:33.786 15:00:11 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:33.786 15:00:11 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:33.786 15:00:11 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:33.786 15:00:11 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:33.786 15:00:11 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:33.786 15:00:11 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:34.045 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:34.045 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:34.045 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:34.045 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:34.045 15:00:12 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:34.045 15:00:12 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:34.045 15:00:12 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:34.045 15:00:12 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:34.045 15:00:12 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:34.045 15:00:12 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:34.045 15:00:12 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:34.045 00:04:34.045 real 0m14.361s 00:04:34.045 user 0m3.579s 00:04:34.045 sys 0m4.625s 00:04:34.045 ************************************ 00:04:34.045 END TEST devices 00:04:34.045 ************************************ 00:04:34.045 15:00:12 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.045 15:00:12 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:34.045 15:00:12 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:34.045 00:04:34.045 real 0m49.720s 00:04:34.045 user 0m11.642s 00:04:34.045 sys 0m17.572s 00:04:34.045 15:00:12 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.045 15:00:12 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:34.045 ************************************ 00:04:34.045 END TEST setup.sh 00:04:34.045 ************************************ 00:04:34.304 15:00:12 -- common/autotest_common.sh@1142 -- # return 0 00:04:34.304 15:00:12 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:34.871 0000:00:03.0 (1af4 
1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:35.130 Hugepages 00:04:35.130 node hugesize free / total 00:04:35.130 node0 1048576kB 0 / 0 00:04:35.390 node0 2048kB 2048 / 2048 00:04:35.390 00:04:35.390 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:35.390 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:35.390 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:35.650 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:35.650 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:04:35.650 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:35.650 15:00:13 -- spdk/autotest.sh@130 -- # uname -s 00:04:35.650 15:00:13 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:35.650 15:00:13 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:35.650 15:00:13 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:36.219 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:37.153 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:37.153 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:37.153 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:37.153 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:37.153 15:00:15 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:38.087 15:00:16 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:38.087 15:00:16 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:38.087 15:00:16 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:38.087 15:00:16 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:38.087 15:00:16 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:38.087 15:00:16 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:38.087 15:00:16 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:38.087 15:00:16 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:38.087 15:00:16 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:38.345 15:00:16 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:04:38.345 15:00:16 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:38.345 15:00:16 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:38.602 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:38.860 Waiting for block devices as requested 00:04:38.860 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:39.119 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:39.119 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:39.119 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:44.382 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:44.382 15:00:22 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:44.382 15:00:22 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:44.382 15:00:22 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:44.382 15:00:22 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:04:44.382 15:00:22 -- common/autotest_common.sh@1502 -- # 
bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:44.382 15:00:22 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:44.382 15:00:22 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:44.382 15:00:22 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:04:44.382 15:00:22 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:04:44.382 15:00:22 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:04:44.382 15:00:22 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:04:44.382 15:00:22 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:44.382 15:00:22 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:44.382 15:00:22 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:44.382 15:00:22 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:44.382 15:00:22 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:44.382 15:00:22 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:04:44.382 15:00:22 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:44.382 15:00:22 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:44.382 15:00:22 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:44.382 15:00:22 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:44.382 15:00:22 -- common/autotest_common.sh@1557 -- # continue 00:04:44.382 15:00:22 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:44.382 15:00:22 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:44.382 15:00:22 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:44.382 15:00:22 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:04:44.382 15:00:22 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:44.382 15:00:22 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:44.382 15:00:22 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:44.382 15:00:22 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:44.382 15:00:22 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:44.382 15:00:22 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:44.382 15:00:22 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:44.382 15:00:22 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:44.382 15:00:22 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:44.382 15:00:22 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:44.382 15:00:22 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:44.382 15:00:22 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:44.382 15:00:22 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:44.382 15:00:22 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:44.382 15:00:22 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:44.382 15:00:22 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:44.382 15:00:22 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:44.382 15:00:22 -- common/autotest_common.sh@1557 -- # continue 00:04:44.382 15:00:22 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:44.382 15:00:22 -- common/autotest_common.sh@1539 -- # 
get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:44.382 15:00:22 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:44.382 15:00:22 -- common/autotest_common.sh@1502 -- # grep 0000:00:12.0/nvme/nvme 00:04:44.382 15:00:22 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:44.382 15:00:22 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:44.382 15:00:22 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:44.382 15:00:22 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme2 00:04:44.382 15:00:22 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme2 00:04:44.382 15:00:22 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme2 ]] 00:04:44.382 15:00:22 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme2 00:04:44.382 15:00:22 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:44.382 15:00:22 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:44.382 15:00:22 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:44.382 15:00:22 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:44.383 15:00:22 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:44.383 15:00:22 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme2 00:04:44.383 15:00:22 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:44.383 15:00:22 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:44.383 15:00:22 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:44.383 15:00:22 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:44.383 15:00:22 -- common/autotest_common.sh@1557 -- # continue 00:04:44.383 15:00:22 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:44.383 15:00:22 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:44.383 15:00:22 -- common/autotest_common.sh@1502 -- # grep 0000:00:13.0/nvme/nvme 00:04:44.383 15:00:22 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:44.383 15:00:22 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:44.383 15:00:22 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:44.383 15:00:22 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:44.383 15:00:22 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme3 00:04:44.383 15:00:22 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme3 00:04:44.383 15:00:22 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme3 ]] 00:04:44.383 15:00:22 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme3 00:04:44.383 15:00:22 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:44.383 15:00:22 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:44.383 15:00:22 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:44.383 15:00:22 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:44.383 15:00:22 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:44.383 15:00:22 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme3 00:04:44.383 15:00:22 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:44.383 15:00:22 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:44.383 15:00:22 -- 
common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:44.383 15:00:22 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:44.383 15:00:22 -- common/autotest_common.sh@1557 -- # continue 00:04:44.383 15:00:22 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:44.383 15:00:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:44.383 15:00:22 -- common/autotest_common.sh@10 -- # set +x 00:04:44.643 15:00:22 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:44.643 15:00:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:44.643 15:00:22 -- common/autotest_common.sh@10 -- # set +x 00:04:44.643 15:00:22 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:45.217 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:45.784 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:45.784 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:45.784 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:46.042 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:46.042 15:00:24 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:46.042 15:00:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:46.042 15:00:24 -- common/autotest_common.sh@10 -- # set +x 00:04:46.042 15:00:24 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:46.042 15:00:24 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:46.042 15:00:24 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:46.042 15:00:24 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:46.042 15:00:24 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:46.042 15:00:24 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:46.042 15:00:24 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:46.042 15:00:24 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:46.042 15:00:24 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:46.042 15:00:24 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:46.042 15:00:24 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:46.301 15:00:24 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:04:46.301 15:00:24 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:46.301 15:00:24 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:46.301 15:00:24 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:46.301 15:00:24 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:46.301 15:00:24 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:46.301 15:00:24 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:46.301 15:00:24 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:46.301 15:00:24 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:46.301 15:00:24 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:46.301 15:00:24 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:46.301 15:00:24 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:46.301 15:00:24 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:46.301 15:00:24 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:46.301 15:00:24 -- 
common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:46.301 15:00:24 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:46.301 15:00:24 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:46.301 15:00:24 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:46.301 15:00:24 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:46.301 15:00:24 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:46.301 15:00:24 -- common/autotest_common.sh@1593 -- # return 0 00:04:46.301 15:00:24 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:46.301 15:00:24 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:46.301 15:00:24 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:46.301 15:00:24 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:46.301 15:00:24 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:46.301 15:00:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:46.301 15:00:24 -- common/autotest_common.sh@10 -- # set +x 00:04:46.301 15:00:24 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:46.301 15:00:24 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:46.301 15:00:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.301 15:00:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.301 15:00:24 -- common/autotest_common.sh@10 -- # set +x 00:04:46.301 ************************************ 00:04:46.301 START TEST env 00:04:46.301 ************************************ 00:04:46.301 15:00:24 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:46.301 * Looking for test storage... 00:04:46.301 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:46.301 15:00:24 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:46.301 15:00:24 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.301 15:00:24 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.301 15:00:24 env -- common/autotest_common.sh@10 -- # set +x 00:04:46.301 ************************************ 00:04:46.301 START TEST env_memory 00:04:46.301 ************************************ 00:04:46.301 15:00:24 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:46.559 00:04:46.559 00:04:46.559 CUnit - A unit testing framework for C - Version 2.1-3 00:04:46.559 http://cunit.sourceforge.net/ 00:04:46.559 00:04:46.559 00:04:46.559 Suite: memory 00:04:46.559 Test: alloc and free memory map ...[2024-07-15 15:00:24.467292] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:46.559 passed 00:04:46.559 Test: mem map translation ...[2024-07-15 15:00:24.515959] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:46.559 [2024-07-15 15:00:24.516088] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:46.559 [2024-07-15 15:00:24.516216] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:46.559 [2024-07-15 15:00:24.516299] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:46.559 passed 00:04:46.559 Test: mem map registration ...[2024-07-15 15:00:24.585571] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:46.559 [2024-07-15 15:00:24.585683] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:46.559 passed 00:04:46.818 Test: mem map adjacent registrations ...passed 00:04:46.818 00:04:46.818 Run Summary: Type Total Ran Passed Failed Inactive 00:04:46.818 suites 1 1 n/a 0 0 00:04:46.818 tests 4 4 4 0 0 00:04:46.818 asserts 152 152 152 0 n/a 00:04:46.818 00:04:46.818 Elapsed time = 0.252 seconds 00:04:46.818 00:04:46.818 real 0m0.299s 00:04:46.818 user 0m0.266s 00:04:46.818 sys 0m0.023s 00:04:46.818 ************************************ 00:04:46.818 END TEST env_memory 00:04:46.818 ************************************ 00:04:46.818 15:00:24 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.818 15:00:24 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:46.818 15:00:24 env -- common/autotest_common.sh@1142 -- # return 0 00:04:46.818 15:00:24 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:46.818 15:00:24 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.818 15:00:24 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.818 15:00:24 env -- common/autotest_common.sh@10 -- # set +x 00:04:46.818 ************************************ 00:04:46.818 START TEST env_vtophys 00:04:46.818 ************************************ 00:04:46.818 15:00:24 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:46.818 EAL: lib.eal log level changed from notice to debug 00:04:46.818 EAL: Detected lcore 0 as core 0 on socket 0 00:04:46.818 EAL: Detected lcore 1 as core 0 on socket 0 00:04:46.818 EAL: Detected lcore 2 as core 0 on socket 0 00:04:46.818 EAL: Detected lcore 3 as core 0 on socket 0 00:04:46.818 EAL: Detected lcore 4 as core 0 on socket 0 00:04:46.818 EAL: Detected lcore 5 as core 0 on socket 0 00:04:46.818 EAL: Detected lcore 6 as core 0 on socket 0 00:04:46.818 EAL: Detected lcore 7 as core 0 on socket 0 00:04:46.818 EAL: Detected lcore 8 as core 0 on socket 0 00:04:46.818 EAL: Detected lcore 9 as core 0 on socket 0 00:04:46.818 EAL: Maximum logical cores by configuration: 128 00:04:46.818 EAL: Detected CPU lcores: 10 00:04:46.818 EAL: Detected NUMA nodes: 1 00:04:46.818 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:46.818 EAL: Detected shared linkage of DPDK 00:04:46.818 EAL: No shared files mode enabled, IPC will be disabled 00:04:46.818 EAL: Selected IOVA mode 'PA' 00:04:46.818 EAL: Probing VFIO support... 00:04:46.818 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:46.818 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:46.818 EAL: Ask a virtual area of 0x2e000 bytes 00:04:46.818 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:46.818 EAL: Setting up physically contiguous memory... 
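A note on the VFIO lines just above: this run selects IOVA mode 'PA' and skips VFIO because no vfio module is present in the VM, which is also why the controllers were bound to uio_pci_generic earlier in the log. A minimal shell sketch for checking that state on a similar host (standard sysfs/procfs paths, not output captured from this job):

    # Is a vfio/vfio-pci module available on this kernel?
    ls /sys/module/vfio /sys/module/vfio_pci 2>/dev/null || echo "vfio not loaded"

    # Which kernel driver is each emulated NVMe controller bound to right now?
    for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
        printf '%s -> %s\n' "$bdf" "$(basename "$(readlink -f /sys/bus/pci/devices/$bdf/driver)")"
    done

    # 2 MB hugepage pool that backs the EAL heap used below
    grep -E 'HugePages_(Total|Free)|Hugepagesize' /proc/meminfo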
00:04:46.818 EAL: Setting maximum number of open files to 524288 00:04:46.818 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:46.818 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:46.818 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.818 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:46.818 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:46.818 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.818 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:46.818 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:46.818 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.818 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:46.818 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:46.818 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.818 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:46.818 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:46.818 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.818 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:46.818 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:46.818 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.818 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:46.818 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:46.818 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.818 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:46.818 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:46.818 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.818 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:46.818 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:46.818 EAL: Hugepages will be freed exactly as allocated. 00:04:46.818 EAL: No shared files mode enabled, IPC is disabled 00:04:46.818 EAL: No shared files mode enabled, IPC is disabled 00:04:47.076 EAL: TSC frequency is ~2290000 KHz 00:04:47.076 EAL: Main lcore 0 is ready (tid=7ffbcb109a40;cpuset=[0]) 00:04:47.076 EAL: Trying to obtain current memory policy. 00:04:47.076 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.076 EAL: Restoring previous memory policy: 0 00:04:47.076 EAL: request: mp_malloc_sync 00:04:47.076 EAL: No shared files mode enabled, IPC is disabled 00:04:47.076 EAL: Heap on socket 0 was expanded by 2MB 00:04:47.076 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:47.076 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:47.076 EAL: Mem event callback 'spdk:(nil)' registered 00:04:47.076 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:47.076 00:04:47.076 00:04:47.076 CUnit - A unit testing framework for C - Version 2.1-3 00:04:47.076 http://cunit.sourceforge.net/ 00:04:47.076 00:04:47.076 00:04:47.076 Suite: components_suite 00:04:47.334 Test: vtophys_malloc_test ...passed 00:04:47.334 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
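The four "VA reserved for memseg list" entries above only reserve virtual address space (4 x 0x400000000 bytes); physical 2 MB hugepages are mapped in on demand, which is what "Hugepages will be freed exactly as allocated" refers to. The backing pool per NUMA node can be read from sysfs; a sketch, assuming the node0 2048 kB pool reported earlier in this log:

    node0=/sys/devices/system/node/node0/hugepages/hugepages-2048kB
    printf 'node0 2048kB total=%s free=%s\n' "$(cat "$node0/nr_hugepages")" "$(cat "$node0/free_hugepages")"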
00:04:47.334 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.334 EAL: Restoring previous memory policy: 4 00:04:47.334 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.334 EAL: request: mp_malloc_sync 00:04:47.335 EAL: No shared files mode enabled, IPC is disabled 00:04:47.335 EAL: Heap on socket 0 was expanded by 4MB 00:04:47.335 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.335 EAL: request: mp_malloc_sync 00:04:47.335 EAL: No shared files mode enabled, IPC is disabled 00:04:47.335 EAL: Heap on socket 0 was shrunk by 4MB 00:04:47.335 EAL: Trying to obtain current memory policy. 00:04:47.335 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.335 EAL: Restoring previous memory policy: 4 00:04:47.335 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.335 EAL: request: mp_malloc_sync 00:04:47.335 EAL: No shared files mode enabled, IPC is disabled 00:04:47.335 EAL: Heap on socket 0 was expanded by 6MB 00:04:47.335 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.335 EAL: request: mp_malloc_sync 00:04:47.335 EAL: No shared files mode enabled, IPC is disabled 00:04:47.335 EAL: Heap on socket 0 was shrunk by 6MB 00:04:47.335 EAL: Trying to obtain current memory policy. 00:04:47.335 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.335 EAL: Restoring previous memory policy: 4 00:04:47.335 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.335 EAL: request: mp_malloc_sync 00:04:47.335 EAL: No shared files mode enabled, IPC is disabled 00:04:47.335 EAL: Heap on socket 0 was expanded by 10MB 00:04:47.335 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.335 EAL: request: mp_malloc_sync 00:04:47.335 EAL: No shared files mode enabled, IPC is disabled 00:04:47.335 EAL: Heap on socket 0 was shrunk by 10MB 00:04:47.335 EAL: Trying to obtain current memory policy. 00:04:47.335 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.335 EAL: Restoring previous memory policy: 4 00:04:47.335 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.335 EAL: request: mp_malloc_sync 00:04:47.335 EAL: No shared files mode enabled, IPC is disabled 00:04:47.335 EAL: Heap on socket 0 was expanded by 18MB 00:04:47.335 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.335 EAL: request: mp_malloc_sync 00:04:47.335 EAL: No shared files mode enabled, IPC is disabled 00:04:47.335 EAL: Heap on socket 0 was shrunk by 18MB 00:04:47.594 EAL: Trying to obtain current memory policy. 00:04:47.594 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.594 EAL: Restoring previous memory policy: 4 00:04:47.594 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.594 EAL: request: mp_malloc_sync 00:04:47.594 EAL: No shared files mode enabled, IPC is disabled 00:04:47.594 EAL: Heap on socket 0 was expanded by 34MB 00:04:47.594 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.594 EAL: request: mp_malloc_sync 00:04:47.594 EAL: No shared files mode enabled, IPC is disabled 00:04:47.594 EAL: Heap on socket 0 was shrunk by 34MB 00:04:47.594 EAL: Trying to obtain current memory policy. 
00:04:47.594 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.594 EAL: Restoring previous memory policy: 4 00:04:47.594 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.594 EAL: request: mp_malloc_sync 00:04:47.594 EAL: No shared files mode enabled, IPC is disabled 00:04:47.594 EAL: Heap on socket 0 was expanded by 66MB 00:04:47.853 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.853 EAL: request: mp_malloc_sync 00:04:47.853 EAL: No shared files mode enabled, IPC is disabled 00:04:47.853 EAL: Heap on socket 0 was shrunk by 66MB 00:04:47.853 EAL: Trying to obtain current memory policy. 00:04:47.853 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.853 EAL: Restoring previous memory policy: 4 00:04:47.853 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.853 EAL: request: mp_malloc_sync 00:04:47.853 EAL: No shared files mode enabled, IPC is disabled 00:04:47.853 EAL: Heap on socket 0 was expanded by 130MB 00:04:48.112 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.112 EAL: request: mp_malloc_sync 00:04:48.112 EAL: No shared files mode enabled, IPC is disabled 00:04:48.112 EAL: Heap on socket 0 was shrunk by 130MB 00:04:48.369 EAL: Trying to obtain current memory policy. 00:04:48.369 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.369 EAL: Restoring previous memory policy: 4 00:04:48.369 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.369 EAL: request: mp_malloc_sync 00:04:48.369 EAL: No shared files mode enabled, IPC is disabled 00:04:48.369 EAL: Heap on socket 0 was expanded by 258MB 00:04:48.937 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.937 EAL: request: mp_malloc_sync 00:04:48.937 EAL: No shared files mode enabled, IPC is disabled 00:04:48.937 EAL: Heap on socket 0 was shrunk by 258MB 00:04:49.503 EAL: Trying to obtain current memory policy. 00:04:49.503 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.503 EAL: Restoring previous memory policy: 4 00:04:49.503 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.503 EAL: request: mp_malloc_sync 00:04:49.503 EAL: No shared files mode enabled, IPC is disabled 00:04:49.503 EAL: Heap on socket 0 was expanded by 514MB 00:04:50.932 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.932 EAL: request: mp_malloc_sync 00:04:50.932 EAL: No shared files mode enabled, IPC is disabled 00:04:50.932 EAL: Heap on socket 0 was shrunk by 514MB 00:04:51.867 EAL: Trying to obtain current memory policy. 
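Each expanded/shrunk pair in the malloc test above is a single allocation that grows the DPDK heap and is then freed, so hugepage consumption tracks the test from outside the process. One way to observe it while the test runs (illustrative only; whether HugePages_Free or HugePages_Rsvd moves first depends on when the pages are actually faulted):

    watch -n 0.5 "grep -E 'HugePages_(Free|Rsvd)' /proc/meminfo"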
00:04:51.867 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.867 EAL: Restoring previous memory policy: 4 00:04:51.867 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.867 EAL: request: mp_malloc_sync 00:04:51.867 EAL: No shared files mode enabled, IPC is disabled 00:04:51.867 EAL: Heap on socket 0 was expanded by 1026MB 00:04:54.409 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.409 EAL: request: mp_malloc_sync 00:04:54.409 EAL: No shared files mode enabled, IPC is disabled 00:04:54.409 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:56.343 passed 00:04:56.343 00:04:56.343 Run Summary: Type Total Ran Passed Failed Inactive 00:04:56.343 suites 1 1 n/a 0 0 00:04:56.343 tests 2 2 2 0 0 00:04:56.343 asserts 5327 5327 5327 0 n/a 00:04:56.343 00:04:56.343 Elapsed time = 8.895 seconds 00:04:56.343 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.343 EAL: request: mp_malloc_sync 00:04:56.343 EAL: No shared files mode enabled, IPC is disabled 00:04:56.343 EAL: Heap on socket 0 was shrunk by 2MB 00:04:56.343 EAL: No shared files mode enabled, IPC is disabled 00:04:56.343 EAL: No shared files mode enabled, IPC is disabled 00:04:56.343 EAL: No shared files mode enabled, IPC is disabled 00:04:56.343 00:04:56.343 real 0m9.186s 00:04:56.343 user 0m8.235s 00:04:56.343 sys 0m0.791s 00:04:56.343 ************************************ 00:04:56.343 END TEST env_vtophys 00:04:56.343 ************************************ 00:04:56.343 15:00:33 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.343 15:00:33 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:56.343 15:00:33 env -- common/autotest_common.sh@1142 -- # return 0 00:04:56.343 15:00:33 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:56.343 15:00:33 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:56.343 15:00:33 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.343 15:00:33 env -- common/autotest_common.sh@10 -- # set +x 00:04:56.343 ************************************ 00:04:56.343 START TEST env_pci 00:04:56.343 ************************************ 00:04:56.343 15:00:34 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:56.343 00:04:56.343 00:04:56.343 CUnit - A unit testing framework for C - Version 2.1-3 00:04:56.343 http://cunit.sourceforge.net/ 00:04:56.343 00:04:56.343 00:04:56.343 Suite: pci 00:04:56.343 Test: pci_hook ...[2024-07-15 15:00:34.059558] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 62180 has claimed it 00:04:56.343 passed 00:04:56.343 00:04:56.343 Run Summary: Type Total Ran Passed Failed Inactive 00:04:56.343 suites 1 1 n/a 0 0 00:04:56.343 tests 1 1 1 0 0 00:04:56.343 asserts 25 25 25 0 n/a 00:04:56.343 00:04:56.343 Elapsed time = 0.008 seconds 00:04:56.343 EAL: Cannot find device (10000:00:01.0) 00:04:56.343 EAL: Failed to attach device on primary process 00:04:56.343 00:04:56.343 real 0m0.099s 00:04:56.343 user 0m0.039s 00:04:56.343 sys 0m0.057s 00:04:56.343 15:00:34 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.343 15:00:34 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:56.343 ************************************ 00:04:56.343 END TEST env_pci 00:04:56.343 ************************************ 00:04:56.343 15:00:34 env -- common/autotest_common.sh@1142 -- # 
return 0 00:04:56.343 15:00:34 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:56.343 15:00:34 env -- env/env.sh@15 -- # uname 00:04:56.343 15:00:34 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:56.343 15:00:34 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:56.343 15:00:34 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:56.343 15:00:34 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:56.343 15:00:34 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.343 15:00:34 env -- common/autotest_common.sh@10 -- # set +x 00:04:56.343 ************************************ 00:04:56.344 START TEST env_dpdk_post_init 00:04:56.344 ************************************ 00:04:56.344 15:00:34 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:56.344 EAL: Detected CPU lcores: 10 00:04:56.344 EAL: Detected NUMA nodes: 1 00:04:56.344 EAL: Detected shared linkage of DPDK 00:04:56.344 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:56.344 EAL: Selected IOVA mode 'PA' 00:04:56.344 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:56.344 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:56.344 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:56.344 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:56.344 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:56.603 Starting DPDK initialization... 00:04:56.603 Starting SPDK post initialization... 00:04:56.603 SPDK NVMe probe 00:04:56.603 Attaching to 0000:00:10.0 00:04:56.603 Attaching to 0000:00:11.0 00:04:56.603 Attaching to 0000:00:12.0 00:04:56.603 Attaching to 0000:00:13.0 00:04:56.603 Attached to 0000:00:13.0 00:04:56.603 Attached to 0000:00:10.0 00:04:56.603 Attached to 0000:00:11.0 00:04:56.603 Attached to 0000:00:12.0 00:04:56.603 Cleaning up... 
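The env_dpdk_post_init run above attaches to all four emulated controllers through the spdk_nvme driver after EAL comes up with '-c 0x1 --base-virtaddr=0x200000000000'. Outside the harness, the same bind/run/reset cycle looks roughly like this, reusing the scripts already exercised in this log (a sketch, not a verbatim reproduction of the job):

    cd /home/vagrant/spdk_repo/spdk
    sudo ./scripts/setup.sh            # unbind the NVMe controllers from the kernel and bind uio_pci_generic/vfio-pci
    sudo ./scripts/setup.sh status     # confirm the BDFs and hugepage summary, as printed earlier in this log
    sudo ./test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
    sudo ./scripts/setup.sh reset      # return the controllers to the kernel nvme driver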
00:04:56.603 00:04:56.603 real 0m0.296s 00:04:56.603 user 0m0.101s 00:04:56.603 sys 0m0.098s 00:04:56.603 15:00:34 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.603 15:00:34 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:56.603 ************************************ 00:04:56.603 END TEST env_dpdk_post_init 00:04:56.603 ************************************ 00:04:56.603 15:00:34 env -- common/autotest_common.sh@1142 -- # return 0 00:04:56.603 15:00:34 env -- env/env.sh@26 -- # uname 00:04:56.603 15:00:34 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:56.603 15:00:34 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:56.603 15:00:34 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:56.603 15:00:34 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.603 15:00:34 env -- common/autotest_common.sh@10 -- # set +x 00:04:56.603 ************************************ 00:04:56.603 START TEST env_mem_callbacks 00:04:56.603 ************************************ 00:04:56.603 15:00:34 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:56.603 EAL: Detected CPU lcores: 10 00:04:56.603 EAL: Detected NUMA nodes: 1 00:04:56.603 EAL: Detected shared linkage of DPDK 00:04:56.603 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:56.603 EAL: Selected IOVA mode 'PA' 00:04:56.862 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:56.862 00:04:56.862 00:04:56.862 CUnit - A unit testing framework for C - Version 2.1-3 00:04:56.862 http://cunit.sourceforge.net/ 00:04:56.862 00:04:56.862 00:04:56.862 Suite: memory 00:04:56.862 Test: test ... 
00:04:56.862 register 0x200000200000 2097152 00:04:56.862 malloc 3145728 00:04:56.862 register 0x200000400000 4194304 00:04:56.862 buf 0x2000004fffc0 len 3145728 PASSED 00:04:56.862 malloc 64 00:04:56.862 buf 0x2000004ffec0 len 64 PASSED 00:04:56.862 malloc 4194304 00:04:56.862 register 0x200000800000 6291456 00:04:56.862 buf 0x2000009fffc0 len 4194304 PASSED 00:04:56.862 free 0x2000004fffc0 3145728 00:04:56.862 free 0x2000004ffec0 64 00:04:56.862 unregister 0x200000400000 4194304 PASSED 00:04:56.862 free 0x2000009fffc0 4194304 00:04:56.862 unregister 0x200000800000 6291456 PASSED 00:04:56.862 malloc 8388608 00:04:56.862 register 0x200000400000 10485760 00:04:56.862 buf 0x2000005fffc0 len 8388608 PASSED 00:04:56.862 free 0x2000005fffc0 8388608 00:04:56.862 unregister 0x200000400000 10485760 PASSED 00:04:56.862 passed 00:04:56.862 00:04:56.862 Run Summary: Type Total Ran Passed Failed Inactive 00:04:56.862 suites 1 1 n/a 0 0 00:04:56.862 tests 1 1 1 0 0 00:04:56.862 asserts 15 15 15 0 n/a 00:04:56.862 00:04:56.862 Elapsed time = 0.089 seconds 00:04:56.862 00:04:56.862 real 0m0.292s 00:04:56.862 user 0m0.124s 00:04:56.862 sys 0m0.066s 00:04:56.862 15:00:34 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.862 15:00:34 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:56.862 ************************************ 00:04:56.862 END TEST env_mem_callbacks 00:04:56.862 ************************************ 00:04:56.862 15:00:34 env -- common/autotest_common.sh@1142 -- # return 0 00:04:56.862 00:04:56.862 real 0m10.612s 00:04:56.862 user 0m8.916s 00:04:56.862 sys 0m1.341s 00:04:56.862 15:00:34 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.862 15:00:34 env -- common/autotest_common.sh@10 -- # set +x 00:04:56.862 ************************************ 00:04:56.862 END TEST env 00:04:56.862 ************************************ 00:04:56.862 15:00:34 -- common/autotest_common.sh@1142 -- # return 0 00:04:56.862 15:00:34 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:56.862 15:00:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:56.862 15:00:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.862 15:00:34 -- common/autotest_common.sh@10 -- # set +x 00:04:56.862 ************************************ 00:04:56.862 START TEST rpc 00:04:56.862 ************************************ 00:04:56.862 15:00:34 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:57.121 * Looking for test storage... 00:04:57.121 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:57.121 15:00:35 rpc -- rpc/rpc.sh@65 -- # spdk_pid=62299 00:04:57.121 15:00:35 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:57.121 15:00:35 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:57.121 15:00:35 rpc -- rpc/rpc.sh@67 -- # waitforlisten 62299 00:04:57.121 15:00:35 rpc -- common/autotest_common.sh@829 -- # '[' -z 62299 ']' 00:04:57.121 15:00:35 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.121 15:00:35 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:57.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.122 15:00:35 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
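The rpc suite that starts here launches spdk_tgt with '-e bdev' and waits on /var/tmp/spdk.sock before issuing RPCs; the rpc_cmd calls in the trace drive scripts/rpc.py. A hand-driven equivalent, run as root, might look like the following sketch (framework_wait_init is an assumed convenience for the wait step; the 8 MB / 512 B malloc arguments and the Malloc0/Passthru0 names mirror the values this test uses):

    cd /home/vagrant/spdk_repo/spdk
    ./build/bin/spdk_tgt -e bdev &
    ./scripts/rpc.py framework_wait_init                     # block until the target is initialized
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 8 512     # 8 MB malloc bdev with 512-byte blocks
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length              # expect 2: Malloc0 and Passthru0
    kill %1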
00:04:57.122 15:00:35 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:57.122 15:00:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.122 [2024-07-15 15:00:35.177117] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:04:57.122 [2024-07-15 15:00:35.177242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62299 ] 00:04:57.383 [2024-07-15 15:00:35.346066] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.653 [2024-07-15 15:00:35.585827] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:57.653 [2024-07-15 15:00:35.585880] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 62299' to capture a snapshot of events at runtime. 00:04:57.653 [2024-07-15 15:00:35.585894] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:57.653 [2024-07-15 15:00:35.585903] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:57.653 [2024-07-15 15:00:35.585913] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid62299 for offline analysis/debug. 00:04:57.653 [2024-07-15 15:00:35.585952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.591 15:00:36 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:58.591 15:00:36 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:58.591 15:00:36 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:58.591 15:00:36 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:58.591 15:00:36 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:58.591 15:00:36 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:58.591 15:00:36 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.591 15:00:36 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.591 15:00:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.591 ************************************ 00:04:58.591 START TEST rpc_integrity 00:04:58.591 ************************************ 00:04:58.591 15:00:36 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:58.591 15:00:36 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:58.591 15:00:36 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.591 15:00:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.591 15:00:36 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.591 15:00:36 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:58.591 15:00:36 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:58.591 15:00:36 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:58.591 15:00:36 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:58.591 15:00:36 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.591 15:00:36 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.591 15:00:36 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.591 15:00:36 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:58.591 15:00:36 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:58.591 15:00:36 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.591 15:00:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.851 15:00:36 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.851 15:00:36 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:58.851 { 00:04:58.851 "name": "Malloc0", 00:04:58.851 "aliases": [ 00:04:58.851 "38503e4c-9a83-4463-87ca-7073bc434fec" 00:04:58.851 ], 00:04:58.851 "product_name": "Malloc disk", 00:04:58.851 "block_size": 512, 00:04:58.851 "num_blocks": 16384, 00:04:58.851 "uuid": "38503e4c-9a83-4463-87ca-7073bc434fec", 00:04:58.851 "assigned_rate_limits": { 00:04:58.851 "rw_ios_per_sec": 0, 00:04:58.851 "rw_mbytes_per_sec": 0, 00:04:58.851 "r_mbytes_per_sec": 0, 00:04:58.851 "w_mbytes_per_sec": 0 00:04:58.851 }, 00:04:58.851 "claimed": false, 00:04:58.851 "zoned": false, 00:04:58.851 "supported_io_types": { 00:04:58.851 "read": true, 00:04:58.851 "write": true, 00:04:58.851 "unmap": true, 00:04:58.851 "flush": true, 00:04:58.851 "reset": true, 00:04:58.851 "nvme_admin": false, 00:04:58.851 "nvme_io": false, 00:04:58.851 "nvme_io_md": false, 00:04:58.851 "write_zeroes": true, 00:04:58.851 "zcopy": true, 00:04:58.851 "get_zone_info": false, 00:04:58.851 "zone_management": false, 00:04:58.851 "zone_append": false, 00:04:58.851 "compare": false, 00:04:58.851 "compare_and_write": false, 00:04:58.851 "abort": true, 00:04:58.851 "seek_hole": false, 00:04:58.851 "seek_data": false, 00:04:58.851 "copy": true, 00:04:58.851 "nvme_iov_md": false 00:04:58.851 }, 00:04:58.851 "memory_domains": [ 00:04:58.851 { 00:04:58.851 "dma_device_id": "system", 00:04:58.851 "dma_device_type": 1 00:04:58.851 }, 00:04:58.851 { 00:04:58.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.851 "dma_device_type": 2 00:04:58.851 } 00:04:58.851 ], 00:04:58.851 "driver_specific": {} 00:04:58.851 } 00:04:58.851 ]' 00:04:58.851 15:00:36 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:58.851 15:00:36 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:58.851 15:00:36 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:58.851 15:00:36 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.851 15:00:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.851 [2024-07-15 15:00:36.781576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:58.851 [2024-07-15 15:00:36.781660] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:58.851 [2024-07-15 15:00:36.781692] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:58.851 [2024-07-15 15:00:36.781703] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:58.851 [2024-07-15 15:00:36.784058] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:58.851 [2024-07-15 15:00:36.784096] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:58.851 Passthru0 00:04:58.851 15:00:36 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.851 
15:00:36 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:58.851 15:00:36 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.851 15:00:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.851 15:00:36 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.851 15:00:36 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:58.851 { 00:04:58.851 "name": "Malloc0", 00:04:58.851 "aliases": [ 00:04:58.851 "38503e4c-9a83-4463-87ca-7073bc434fec" 00:04:58.851 ], 00:04:58.851 "product_name": "Malloc disk", 00:04:58.851 "block_size": 512, 00:04:58.851 "num_blocks": 16384, 00:04:58.851 "uuid": "38503e4c-9a83-4463-87ca-7073bc434fec", 00:04:58.851 "assigned_rate_limits": { 00:04:58.851 "rw_ios_per_sec": 0, 00:04:58.851 "rw_mbytes_per_sec": 0, 00:04:58.851 "r_mbytes_per_sec": 0, 00:04:58.851 "w_mbytes_per_sec": 0 00:04:58.851 }, 00:04:58.851 "claimed": true, 00:04:58.851 "claim_type": "exclusive_write", 00:04:58.851 "zoned": false, 00:04:58.851 "supported_io_types": { 00:04:58.851 "read": true, 00:04:58.851 "write": true, 00:04:58.851 "unmap": true, 00:04:58.851 "flush": true, 00:04:58.851 "reset": true, 00:04:58.851 "nvme_admin": false, 00:04:58.851 "nvme_io": false, 00:04:58.851 "nvme_io_md": false, 00:04:58.851 "write_zeroes": true, 00:04:58.851 "zcopy": true, 00:04:58.851 "get_zone_info": false, 00:04:58.851 "zone_management": false, 00:04:58.851 "zone_append": false, 00:04:58.851 "compare": false, 00:04:58.851 "compare_and_write": false, 00:04:58.851 "abort": true, 00:04:58.851 "seek_hole": false, 00:04:58.851 "seek_data": false, 00:04:58.851 "copy": true, 00:04:58.851 "nvme_iov_md": false 00:04:58.851 }, 00:04:58.851 "memory_domains": [ 00:04:58.851 { 00:04:58.851 "dma_device_id": "system", 00:04:58.851 "dma_device_type": 1 00:04:58.851 }, 00:04:58.851 { 00:04:58.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.851 "dma_device_type": 2 00:04:58.851 } 00:04:58.851 ], 00:04:58.851 "driver_specific": {} 00:04:58.851 }, 00:04:58.851 { 00:04:58.851 "name": "Passthru0", 00:04:58.851 "aliases": [ 00:04:58.851 "aaad338c-5c6b-5e12-9f16-1f4d09bb92e7" 00:04:58.851 ], 00:04:58.851 "product_name": "passthru", 00:04:58.851 "block_size": 512, 00:04:58.851 "num_blocks": 16384, 00:04:58.851 "uuid": "aaad338c-5c6b-5e12-9f16-1f4d09bb92e7", 00:04:58.851 "assigned_rate_limits": { 00:04:58.851 "rw_ios_per_sec": 0, 00:04:58.851 "rw_mbytes_per_sec": 0, 00:04:58.851 "r_mbytes_per_sec": 0, 00:04:58.851 "w_mbytes_per_sec": 0 00:04:58.851 }, 00:04:58.851 "claimed": false, 00:04:58.851 "zoned": false, 00:04:58.851 "supported_io_types": { 00:04:58.851 "read": true, 00:04:58.851 "write": true, 00:04:58.851 "unmap": true, 00:04:58.851 "flush": true, 00:04:58.852 "reset": true, 00:04:58.852 "nvme_admin": false, 00:04:58.852 "nvme_io": false, 00:04:58.852 "nvme_io_md": false, 00:04:58.852 "write_zeroes": true, 00:04:58.852 "zcopy": true, 00:04:58.852 "get_zone_info": false, 00:04:58.852 "zone_management": false, 00:04:58.852 "zone_append": false, 00:04:58.852 "compare": false, 00:04:58.852 "compare_and_write": false, 00:04:58.852 "abort": true, 00:04:58.852 "seek_hole": false, 00:04:58.852 "seek_data": false, 00:04:58.852 "copy": true, 00:04:58.852 "nvme_iov_md": false 00:04:58.852 }, 00:04:58.852 "memory_domains": [ 00:04:58.852 { 00:04:58.852 "dma_device_id": "system", 00:04:58.852 "dma_device_type": 1 00:04:58.852 }, 00:04:58.852 { 00:04:58.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.852 "dma_device_type": 2 
00:04:58.852 } 00:04:58.852 ], 00:04:58.852 "driver_specific": { 00:04:58.852 "passthru": { 00:04:58.852 "name": "Passthru0", 00:04:58.852 "base_bdev_name": "Malloc0" 00:04:58.852 } 00:04:58.852 } 00:04:58.852 } 00:04:58.852 ]' 00:04:58.852 15:00:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:58.852 15:00:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:58.852 15:00:36 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:58.852 15:00:36 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.852 15:00:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.852 15:00:36 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.852 15:00:36 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:58.852 15:00:36 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.852 15:00:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.852 15:00:36 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.852 15:00:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:58.852 15:00:36 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.852 15:00:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.852 15:00:36 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.852 15:00:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:58.852 15:00:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:59.108 15:00:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:59.108 00:04:59.108 real 0m0.387s 00:04:59.108 user 0m0.215s 00:04:59.108 sys 0m0.053s 00:04:59.108 15:00:36 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.108 15:00:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.108 ************************************ 00:04:59.108 END TEST rpc_integrity 00:04:59.108 ************************************ 00:04:59.108 15:00:37 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:59.108 15:00:37 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:59.108 15:00:37 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.108 15:00:37 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.108 15:00:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.108 ************************************ 00:04:59.108 START TEST rpc_plugins 00:04:59.108 ************************************ 00:04:59.108 15:00:37 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:59.108 15:00:37 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:59.108 15:00:37 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.108 15:00:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:59.108 15:00:37 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.108 15:00:37 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:59.108 15:00:37 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:59.108 15:00:37 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.108 15:00:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:59.108 15:00:37 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.108 15:00:37 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
bdevs='[ 00:04:59.108 { 00:04:59.108 "name": "Malloc1", 00:04:59.108 "aliases": [ 00:04:59.108 "b73ab5c7-4e40-4955-bef8-90d93738655e" 00:04:59.108 ], 00:04:59.108 "product_name": "Malloc disk", 00:04:59.108 "block_size": 4096, 00:04:59.108 "num_blocks": 256, 00:04:59.108 "uuid": "b73ab5c7-4e40-4955-bef8-90d93738655e", 00:04:59.108 "assigned_rate_limits": { 00:04:59.108 "rw_ios_per_sec": 0, 00:04:59.108 "rw_mbytes_per_sec": 0, 00:04:59.108 "r_mbytes_per_sec": 0, 00:04:59.108 "w_mbytes_per_sec": 0 00:04:59.108 }, 00:04:59.108 "claimed": false, 00:04:59.108 "zoned": false, 00:04:59.108 "supported_io_types": { 00:04:59.108 "read": true, 00:04:59.108 "write": true, 00:04:59.108 "unmap": true, 00:04:59.108 "flush": true, 00:04:59.108 "reset": true, 00:04:59.108 "nvme_admin": false, 00:04:59.108 "nvme_io": false, 00:04:59.108 "nvme_io_md": false, 00:04:59.108 "write_zeroes": true, 00:04:59.108 "zcopy": true, 00:04:59.108 "get_zone_info": false, 00:04:59.108 "zone_management": false, 00:04:59.108 "zone_append": false, 00:04:59.108 "compare": false, 00:04:59.108 "compare_and_write": false, 00:04:59.108 "abort": true, 00:04:59.108 "seek_hole": false, 00:04:59.108 "seek_data": false, 00:04:59.108 "copy": true, 00:04:59.108 "nvme_iov_md": false 00:04:59.108 }, 00:04:59.108 "memory_domains": [ 00:04:59.108 { 00:04:59.108 "dma_device_id": "system", 00:04:59.108 "dma_device_type": 1 00:04:59.108 }, 00:04:59.108 { 00:04:59.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.108 "dma_device_type": 2 00:04:59.108 } 00:04:59.108 ], 00:04:59.108 "driver_specific": {} 00:04:59.108 } 00:04:59.108 ]' 00:04:59.108 15:00:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:59.108 15:00:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:59.108 15:00:37 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:59.108 15:00:37 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.108 15:00:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:59.108 15:00:37 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.108 15:00:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:59.108 15:00:37 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.108 15:00:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:59.108 15:00:37 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.108 15:00:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:59.108 15:00:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:59.366 15:00:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:59.366 00:04:59.366 real 0m0.174s 00:04:59.366 user 0m0.100s 00:04:59.366 sys 0m0.028s 00:04:59.366 15:00:37 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.366 15:00:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:59.366 ************************************ 00:04:59.366 END TEST rpc_plugins 00:04:59.366 ************************************ 00:04:59.366 15:00:37 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:59.366 15:00:37 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:59.366 15:00:37 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.366 15:00:37 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.366 15:00:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.366 ************************************ 00:04:59.366 
START TEST rpc_trace_cmd_test 00:04:59.366 ************************************ 00:04:59.366 15:00:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:59.366 15:00:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:59.366 15:00:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:59.366 15:00:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.366 15:00:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:59.366 15:00:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.366 15:00:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:59.366 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid62299", 00:04:59.367 "tpoint_group_mask": "0x8", 00:04:59.367 "iscsi_conn": { 00:04:59.367 "mask": "0x2", 00:04:59.367 "tpoint_mask": "0x0" 00:04:59.367 }, 00:04:59.367 "scsi": { 00:04:59.367 "mask": "0x4", 00:04:59.367 "tpoint_mask": "0x0" 00:04:59.367 }, 00:04:59.367 "bdev": { 00:04:59.367 "mask": "0x8", 00:04:59.367 "tpoint_mask": "0xffffffffffffffff" 00:04:59.367 }, 00:04:59.367 "nvmf_rdma": { 00:04:59.367 "mask": "0x10", 00:04:59.367 "tpoint_mask": "0x0" 00:04:59.367 }, 00:04:59.367 "nvmf_tcp": { 00:04:59.367 "mask": "0x20", 00:04:59.367 "tpoint_mask": "0x0" 00:04:59.367 }, 00:04:59.367 "ftl": { 00:04:59.367 "mask": "0x40", 00:04:59.367 "tpoint_mask": "0x0" 00:04:59.367 }, 00:04:59.367 "blobfs": { 00:04:59.367 "mask": "0x80", 00:04:59.367 "tpoint_mask": "0x0" 00:04:59.367 }, 00:04:59.367 "dsa": { 00:04:59.367 "mask": "0x200", 00:04:59.367 "tpoint_mask": "0x0" 00:04:59.367 }, 00:04:59.367 "thread": { 00:04:59.367 "mask": "0x400", 00:04:59.367 "tpoint_mask": "0x0" 00:04:59.367 }, 00:04:59.367 "nvme_pcie": { 00:04:59.367 "mask": "0x800", 00:04:59.367 "tpoint_mask": "0x0" 00:04:59.367 }, 00:04:59.367 "iaa": { 00:04:59.367 "mask": "0x1000", 00:04:59.367 "tpoint_mask": "0x0" 00:04:59.367 }, 00:04:59.367 "nvme_tcp": { 00:04:59.367 "mask": "0x2000", 00:04:59.367 "tpoint_mask": "0x0" 00:04:59.367 }, 00:04:59.367 "bdev_nvme": { 00:04:59.367 "mask": "0x4000", 00:04:59.367 "tpoint_mask": "0x0" 00:04:59.367 }, 00:04:59.367 "sock": { 00:04:59.367 "mask": "0x8000", 00:04:59.367 "tpoint_mask": "0x0" 00:04:59.367 } 00:04:59.367 }' 00:04:59.367 15:00:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:59.367 15:00:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:59.367 15:00:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:59.367 15:00:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:59.367 15:00:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:59.367 15:00:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:59.367 15:00:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:59.625 15:00:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:59.625 15:00:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:59.625 ************************************ 00:04:59.625 END TEST rpc_trace_cmd_test 00:04:59.625 ************************************ 00:04:59.625 15:00:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:59.625 00:04:59.625 real 0m0.249s 00:04:59.625 user 0m0.196s 00:04:59.625 sys 0m0.042s 00:04:59.625 15:00:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.625 
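The rpc_trace_cmd_test run above only asserts that trace_get_info reports a shared-memory path, a group mask, and a fully enabled bdev tpoint mask. A rough standalone equivalent, assuming the stock scripts/rpc.py client and a target started with the bdev tracepoint group enabled (the -e flag and client path are not shown in this log), would be:

  # start the target with bdev tracepoints enabled (group mask 0x8 in this log)
  build/bin/spdk_tgt -m 0x1 -e bdev &
  # dump the current trace configuration and pick out the fields the test checks
  scripts/rpc.py trace_get_info > info.json
  jq -r .tpoint_shm_path info.json      # e.g. /dev/shm/spdk_tgt_trace.pid<PID>
  jq -r .tpoint_group_mask info.json    # 0x8
  jq -r .bdev.tpoint_mask info.json     # 0xffffffffffffffff when fully enabled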
15:00:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:59.625 15:00:37 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:59.625 15:00:37 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:59.625 15:00:37 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:59.625 15:00:37 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:59.625 15:00:37 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.625 15:00:37 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.625 15:00:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.625 ************************************ 00:04:59.625 START TEST rpc_daemon_integrity 00:04:59.625 ************************************ 00:04:59.625 15:00:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:59.625 15:00:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:59.625 15:00:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.625 15:00:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.625 15:00:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.625 15:00:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:59.625 15:00:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:59.625 15:00:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:59.625 15:00:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:59.625 15:00:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.625 15:00:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.625 15:00:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.625 15:00:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:59.625 15:00:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:59.625 15:00:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.625 15:00:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.625 15:00:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.625 15:00:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:59.625 { 00:04:59.625 "name": "Malloc2", 00:04:59.625 "aliases": [ 00:04:59.625 "0ad57111-e626-4d21-af8b-3050cd2da471" 00:04:59.625 ], 00:04:59.625 "product_name": "Malloc disk", 00:04:59.625 "block_size": 512, 00:04:59.625 "num_blocks": 16384, 00:04:59.625 "uuid": "0ad57111-e626-4d21-af8b-3050cd2da471", 00:04:59.625 "assigned_rate_limits": { 00:04:59.625 "rw_ios_per_sec": 0, 00:04:59.625 "rw_mbytes_per_sec": 0, 00:04:59.625 "r_mbytes_per_sec": 0, 00:04:59.625 "w_mbytes_per_sec": 0 00:04:59.625 }, 00:04:59.625 "claimed": false, 00:04:59.625 "zoned": false, 00:04:59.625 "supported_io_types": { 00:04:59.625 "read": true, 00:04:59.625 "write": true, 00:04:59.625 "unmap": true, 00:04:59.625 "flush": true, 00:04:59.625 "reset": true, 00:04:59.625 "nvme_admin": false, 00:04:59.625 "nvme_io": false, 00:04:59.625 "nvme_io_md": false, 00:04:59.625 "write_zeroes": true, 00:04:59.625 "zcopy": true, 00:04:59.625 "get_zone_info": false, 00:04:59.625 "zone_management": false, 00:04:59.625 "zone_append": false, 00:04:59.625 "compare": false, 00:04:59.625 "compare_and_write": false, 00:04:59.625 "abort": true, 00:04:59.625 "seek_hole": false, 
00:04:59.625 "seek_data": false, 00:04:59.625 "copy": true, 00:04:59.625 "nvme_iov_md": false 00:04:59.625 }, 00:04:59.625 "memory_domains": [ 00:04:59.625 { 00:04:59.625 "dma_device_id": "system", 00:04:59.625 "dma_device_type": 1 00:04:59.625 }, 00:04:59.625 { 00:04:59.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.625 "dma_device_type": 2 00:04:59.625 } 00:04:59.625 ], 00:04:59.625 "driver_specific": {} 00:04:59.625 } 00:04:59.625 ]' 00:04:59.625 15:00:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:59.883 15:00:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:59.883 15:00:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:59.883 15:00:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.883 15:00:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.883 [2024-07-15 15:00:37.765451] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:59.883 [2024-07-15 15:00:37.765523] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:59.883 [2024-07-15 15:00:37.765551] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:59.883 [2024-07-15 15:00:37.765561] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:59.883 [2024-07-15 15:00:37.768074] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:59.883 [2024-07-15 15:00:37.768132] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:59.883 Passthru0 00:04:59.883 15:00:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.883 15:00:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:59.883 15:00:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.883 15:00:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.883 15:00:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.883 15:00:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:59.883 { 00:04:59.883 "name": "Malloc2", 00:04:59.883 "aliases": [ 00:04:59.883 "0ad57111-e626-4d21-af8b-3050cd2da471" 00:04:59.883 ], 00:04:59.883 "product_name": "Malloc disk", 00:04:59.883 "block_size": 512, 00:04:59.883 "num_blocks": 16384, 00:04:59.883 "uuid": "0ad57111-e626-4d21-af8b-3050cd2da471", 00:04:59.883 "assigned_rate_limits": { 00:04:59.883 "rw_ios_per_sec": 0, 00:04:59.883 "rw_mbytes_per_sec": 0, 00:04:59.883 "r_mbytes_per_sec": 0, 00:04:59.883 "w_mbytes_per_sec": 0 00:04:59.883 }, 00:04:59.883 "claimed": true, 00:04:59.883 "claim_type": "exclusive_write", 00:04:59.883 "zoned": false, 00:04:59.883 "supported_io_types": { 00:04:59.883 "read": true, 00:04:59.883 "write": true, 00:04:59.883 "unmap": true, 00:04:59.883 "flush": true, 00:04:59.883 "reset": true, 00:04:59.883 "nvme_admin": false, 00:04:59.883 "nvme_io": false, 00:04:59.883 "nvme_io_md": false, 00:04:59.883 "write_zeroes": true, 00:04:59.883 "zcopy": true, 00:04:59.883 "get_zone_info": false, 00:04:59.883 "zone_management": false, 00:04:59.883 "zone_append": false, 00:04:59.883 "compare": false, 00:04:59.883 "compare_and_write": false, 00:04:59.883 "abort": true, 00:04:59.883 "seek_hole": false, 00:04:59.883 "seek_data": false, 00:04:59.883 "copy": true, 00:04:59.883 "nvme_iov_md": false 00:04:59.883 }, 00:04:59.883 
"memory_domains": [ 00:04:59.883 { 00:04:59.883 "dma_device_id": "system", 00:04:59.883 "dma_device_type": 1 00:04:59.883 }, 00:04:59.883 { 00:04:59.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.883 "dma_device_type": 2 00:04:59.883 } 00:04:59.883 ], 00:04:59.883 "driver_specific": {} 00:04:59.883 }, 00:04:59.883 { 00:04:59.883 "name": "Passthru0", 00:04:59.883 "aliases": [ 00:04:59.883 "687adb2d-dd81-5ea5-b297-9c3fcb3262bb" 00:04:59.883 ], 00:04:59.883 "product_name": "passthru", 00:04:59.883 "block_size": 512, 00:04:59.883 "num_blocks": 16384, 00:04:59.883 "uuid": "687adb2d-dd81-5ea5-b297-9c3fcb3262bb", 00:04:59.883 "assigned_rate_limits": { 00:04:59.883 "rw_ios_per_sec": 0, 00:04:59.883 "rw_mbytes_per_sec": 0, 00:04:59.883 "r_mbytes_per_sec": 0, 00:04:59.883 "w_mbytes_per_sec": 0 00:04:59.883 }, 00:04:59.883 "claimed": false, 00:04:59.883 "zoned": false, 00:04:59.883 "supported_io_types": { 00:04:59.883 "read": true, 00:04:59.883 "write": true, 00:04:59.883 "unmap": true, 00:04:59.883 "flush": true, 00:04:59.883 "reset": true, 00:04:59.883 "nvme_admin": false, 00:04:59.883 "nvme_io": false, 00:04:59.883 "nvme_io_md": false, 00:04:59.883 "write_zeroes": true, 00:04:59.883 "zcopy": true, 00:04:59.883 "get_zone_info": false, 00:04:59.883 "zone_management": false, 00:04:59.883 "zone_append": false, 00:04:59.883 "compare": false, 00:04:59.883 "compare_and_write": false, 00:04:59.883 "abort": true, 00:04:59.883 "seek_hole": false, 00:04:59.883 "seek_data": false, 00:04:59.883 "copy": true, 00:04:59.883 "nvme_iov_md": false 00:04:59.883 }, 00:04:59.883 "memory_domains": [ 00:04:59.883 { 00:04:59.883 "dma_device_id": "system", 00:04:59.883 "dma_device_type": 1 00:04:59.883 }, 00:04:59.883 { 00:04:59.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.883 "dma_device_type": 2 00:04:59.883 } 00:04:59.883 ], 00:04:59.883 "driver_specific": { 00:04:59.883 "passthru": { 00:04:59.883 "name": "Passthru0", 00:04:59.883 "base_bdev_name": "Malloc2" 00:04:59.883 } 00:04:59.883 } 00:04:59.883 } 00:04:59.883 ]' 00:04:59.883 15:00:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:59.883 15:00:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:59.883 15:00:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:59.883 15:00:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.883 15:00:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.883 15:00:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.883 15:00:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:59.883 15:00:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.883 15:00:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.883 15:00:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.883 15:00:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:59.883 15:00:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.883 15:00:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.883 15:00:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.883 15:00:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:59.883 15:00:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:59.883 
************************************ 00:04:59.883 END TEST rpc_daemon_integrity 00:04:59.883 ************************************ 00:04:59.883 15:00:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:59.883 00:04:59.883 real 0m0.348s 00:04:59.883 user 0m0.200s 00:04:59.883 sys 0m0.041s 00:04:59.883 15:00:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.883 15:00:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.142 15:00:38 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:00.142 15:00:38 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:00.142 15:00:38 rpc -- rpc/rpc.sh@84 -- # killprocess 62299 00:05:00.142 15:00:38 rpc -- common/autotest_common.sh@948 -- # '[' -z 62299 ']' 00:05:00.142 15:00:38 rpc -- common/autotest_common.sh@952 -- # kill -0 62299 00:05:00.142 15:00:38 rpc -- common/autotest_common.sh@953 -- # uname 00:05:00.142 15:00:38 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:00.142 15:00:38 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62299 00:05:00.142 15:00:38 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:00.142 killing process with pid 62299 00:05:00.142 15:00:38 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:00.142 15:00:38 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62299' 00:05:00.142 15:00:38 rpc -- common/autotest_common.sh@967 -- # kill 62299 00:05:00.142 15:00:38 rpc -- common/autotest_common.sh@972 -- # wait 62299 00:05:02.681 ************************************ 00:05:02.681 END TEST rpc 00:05:02.681 ************************************ 00:05:02.681 00:05:02.681 real 0m5.772s 00:05:02.681 user 0m6.338s 00:05:02.681 sys 0m0.881s 00:05:02.681 15:00:40 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.681 15:00:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.681 15:00:40 -- common/autotest_common.sh@1142 -- # return 0 00:05:02.681 15:00:40 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:02.681 15:00:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.681 15:00:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.681 15:00:40 -- common/autotest_common.sh@10 -- # set +x 00:05:02.681 ************************************ 00:05:02.681 START TEST skip_rpc 00:05:02.681 ************************************ 00:05:02.681 15:00:40 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:02.939 * Looking for test storage... 
00:05:02.939 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:02.939 15:00:40 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:02.939 15:00:40 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:02.939 15:00:40 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:02.939 15:00:40 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.939 15:00:40 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.939 15:00:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.939 ************************************ 00:05:02.939 START TEST skip_rpc 00:05:02.939 ************************************ 00:05:02.939 15:00:40 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:02.939 15:00:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=62521 00:05:02.939 15:00:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:02.939 15:00:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:02.939 15:00:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:02.939 [2024-07-15 15:00:41.023029] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:05:02.939 [2024-07-15 15:00:41.023143] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62521 ] 00:05:03.198 [2024-07-15 15:00:41.187528] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.457 [2024-07-15 15:00:41.492444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.743 15:00:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:08.743 15:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:08.743 15:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:08.743 15:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:08.743 15:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:08.743 15:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:08.743 15:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:08.743 15:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:08.743 15:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.743 15:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.743 15:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:08.743 15:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:08.743 15:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:08.743 15:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:08.743 15:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:08.743 15:00:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:08.743 15:00:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 62521 
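The skip_rpc case above verifies that a target launched with --no-rpc-server never opens an RPC listener, so any client call must fail (the NOT wrapper asserts a non-zero exit). A minimal reproduction, with the rpc.py path assumed rather than taken from this log:

  # no RPC listener is created with --no-rpc-server
  build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  sleep 5
  # expected to fail: /var/tmp/spdk.sock is never created
  scripts/rpc.py spdk_get_version || echo "RPC unavailable, as expected"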
00:05:08.743 15:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 62521 ']' 00:05:08.743 15:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 62521 00:05:08.743 15:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:08.743 15:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:08.743 15:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62521 00:05:08.743 killing process with pid 62521 00:05:08.743 15:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:08.743 15:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:08.743 15:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62521' 00:05:08.743 15:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 62521 00:05:08.743 15:00:45 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 62521 00:05:11.313 00:05:11.313 real 0m8.102s 00:05:11.313 user 0m7.466s 00:05:11.313 sys 0m0.545s 00:05:11.313 15:00:49 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.313 15:00:49 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.313 ************************************ 00:05:11.313 END TEST skip_rpc 00:05:11.313 ************************************ 00:05:11.313 15:00:49 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:11.313 15:00:49 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:11.313 15:00:49 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.313 15:00:49 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.313 15:00:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.313 ************************************ 00:05:11.313 START TEST skip_rpc_with_json 00:05:11.313 ************************************ 00:05:11.313 15:00:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:11.313 15:00:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:11.314 15:00:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=62636 00:05:11.314 15:00:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:11.314 15:00:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:11.314 15:00:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 62636 00:05:11.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.314 15:00:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 62636 ']' 00:05:11.314 15:00:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.314 15:00:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.314 15:00:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
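The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message comes from the harness polling the RPC socket before the next test proceeds. A simplified sketch of that wait, not the actual helper, could look like:

  # hypothetical poll loop: succeed once the target answers on the default socket
  wait_for_rpc() {
    for _ in $(seq 1 100); do
      scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1 && return 0
      sleep 0.5
    done
    return 1
  }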
00:05:11.314 15:00:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.314 15:00:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:11.314 [2024-07-15 15:00:49.192071] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:05:11.314 [2024-07-15 15:00:49.192225] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62636 ] 00:05:11.314 [2024-07-15 15:00:49.358102] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.580 [2024-07-15 15:00:49.665391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.968 15:00:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.968 15:00:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:12.968 15:00:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:12.968 15:00:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.968 15:00:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:12.968 [2024-07-15 15:00:50.779219] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:12.968 request: 00:05:12.968 { 00:05:12.968 "trtype": "tcp", 00:05:12.968 "method": "nvmf_get_transports", 00:05:12.968 "req_id": 1 00:05:12.968 } 00:05:12.968 Got JSON-RPC error response 00:05:12.968 response: 00:05:12.968 { 00:05:12.968 "code": -19, 00:05:12.968 "message": "No such device" 00:05:12.968 } 00:05:12.968 15:00:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:12.968 15:00:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:12.968 15:00:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.968 15:00:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:12.968 [2024-07-15 15:00:50.791291] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:12.968 15:00:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.968 15:00:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:12.968 15:00:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.968 15:00:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:12.968 15:00:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.968 15:00:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:12.968 { 00:05:12.968 "subsystems": [ 00:05:12.968 { 00:05:12.968 "subsystem": "keyring", 00:05:12.968 "config": [] 00:05:12.968 }, 00:05:12.968 { 00:05:12.968 "subsystem": "iobuf", 00:05:12.968 "config": [ 00:05:12.968 { 00:05:12.968 "method": "iobuf_set_options", 00:05:12.968 "params": { 00:05:12.968 "small_pool_count": 8192, 00:05:12.968 "large_pool_count": 1024, 00:05:12.968 "small_bufsize": 8192, 00:05:12.968 "large_bufsize": 135168 00:05:12.968 } 00:05:12.968 } 00:05:12.968 ] 00:05:12.968 }, 00:05:12.968 { 00:05:12.968 "subsystem": "sock", 00:05:12.968 "config": [ 00:05:12.968 { 00:05:12.968 "method": 
"sock_set_default_impl", 00:05:12.968 "params": { 00:05:12.968 "impl_name": "posix" 00:05:12.968 } 00:05:12.968 }, 00:05:12.968 { 00:05:12.968 "method": "sock_impl_set_options", 00:05:12.968 "params": { 00:05:12.968 "impl_name": "ssl", 00:05:12.968 "recv_buf_size": 4096, 00:05:12.968 "send_buf_size": 4096, 00:05:12.968 "enable_recv_pipe": true, 00:05:12.968 "enable_quickack": false, 00:05:12.968 "enable_placement_id": 0, 00:05:12.968 "enable_zerocopy_send_server": true, 00:05:12.968 "enable_zerocopy_send_client": false, 00:05:12.968 "zerocopy_threshold": 0, 00:05:12.968 "tls_version": 0, 00:05:12.968 "enable_ktls": false 00:05:12.968 } 00:05:12.968 }, 00:05:12.968 { 00:05:12.968 "method": "sock_impl_set_options", 00:05:12.968 "params": { 00:05:12.968 "impl_name": "posix", 00:05:12.968 "recv_buf_size": 2097152, 00:05:12.968 "send_buf_size": 2097152, 00:05:12.968 "enable_recv_pipe": true, 00:05:12.968 "enable_quickack": false, 00:05:12.968 "enable_placement_id": 0, 00:05:12.968 "enable_zerocopy_send_server": true, 00:05:12.968 "enable_zerocopy_send_client": false, 00:05:12.968 "zerocopy_threshold": 0, 00:05:12.968 "tls_version": 0, 00:05:12.968 "enable_ktls": false 00:05:12.968 } 00:05:12.968 } 00:05:12.968 ] 00:05:12.968 }, 00:05:12.968 { 00:05:12.968 "subsystem": "vmd", 00:05:12.968 "config": [] 00:05:12.968 }, 00:05:12.968 { 00:05:12.968 "subsystem": "accel", 00:05:12.968 "config": [ 00:05:12.968 { 00:05:12.968 "method": "accel_set_options", 00:05:12.968 "params": { 00:05:12.968 "small_cache_size": 128, 00:05:12.968 "large_cache_size": 16, 00:05:12.968 "task_count": 2048, 00:05:12.968 "sequence_count": 2048, 00:05:12.968 "buf_count": 2048 00:05:12.968 } 00:05:12.968 } 00:05:12.968 ] 00:05:12.968 }, 00:05:12.968 { 00:05:12.968 "subsystem": "bdev", 00:05:12.968 "config": [ 00:05:12.968 { 00:05:12.968 "method": "bdev_set_options", 00:05:12.968 "params": { 00:05:12.968 "bdev_io_pool_size": 65535, 00:05:12.968 "bdev_io_cache_size": 256, 00:05:12.968 "bdev_auto_examine": true, 00:05:12.968 "iobuf_small_cache_size": 128, 00:05:12.968 "iobuf_large_cache_size": 16 00:05:12.968 } 00:05:12.968 }, 00:05:12.968 { 00:05:12.968 "method": "bdev_raid_set_options", 00:05:12.968 "params": { 00:05:12.968 "process_window_size_kb": 1024 00:05:12.968 } 00:05:12.968 }, 00:05:12.968 { 00:05:12.968 "method": "bdev_iscsi_set_options", 00:05:12.968 "params": { 00:05:12.968 "timeout_sec": 30 00:05:12.968 } 00:05:12.968 }, 00:05:12.968 { 00:05:12.968 "method": "bdev_nvme_set_options", 00:05:12.968 "params": { 00:05:12.968 "action_on_timeout": "none", 00:05:12.968 "timeout_us": 0, 00:05:12.968 "timeout_admin_us": 0, 00:05:12.968 "keep_alive_timeout_ms": 10000, 00:05:12.968 "arbitration_burst": 0, 00:05:12.968 "low_priority_weight": 0, 00:05:12.968 "medium_priority_weight": 0, 00:05:12.968 "high_priority_weight": 0, 00:05:12.968 "nvme_adminq_poll_period_us": 10000, 00:05:12.968 "nvme_ioq_poll_period_us": 0, 00:05:12.968 "io_queue_requests": 0, 00:05:12.968 "delay_cmd_submit": true, 00:05:12.968 "transport_retry_count": 4, 00:05:12.968 "bdev_retry_count": 3, 00:05:12.968 "transport_ack_timeout": 0, 00:05:12.968 "ctrlr_loss_timeout_sec": 0, 00:05:12.968 "reconnect_delay_sec": 0, 00:05:12.968 "fast_io_fail_timeout_sec": 0, 00:05:12.968 "disable_auto_failback": false, 00:05:12.968 "generate_uuids": false, 00:05:12.968 "transport_tos": 0, 00:05:12.968 "nvme_error_stat": false, 00:05:12.969 "rdma_srq_size": 0, 00:05:12.969 "io_path_stat": false, 00:05:12.969 "allow_accel_sequence": false, 00:05:12.969 "rdma_max_cq_size": 0, 
00:05:12.969 "rdma_cm_event_timeout_ms": 0, 00:05:12.969 "dhchap_digests": [ 00:05:12.969 "sha256", 00:05:12.969 "sha384", 00:05:12.969 "sha512" 00:05:12.969 ], 00:05:12.969 "dhchap_dhgroups": [ 00:05:12.969 "null", 00:05:12.969 "ffdhe2048", 00:05:12.969 "ffdhe3072", 00:05:12.969 "ffdhe4096", 00:05:12.969 "ffdhe6144", 00:05:12.969 "ffdhe8192" 00:05:12.969 ] 00:05:12.969 } 00:05:12.969 }, 00:05:12.969 { 00:05:12.969 "method": "bdev_nvme_set_hotplug", 00:05:12.969 "params": { 00:05:12.969 "period_us": 100000, 00:05:12.969 "enable": false 00:05:12.969 } 00:05:12.969 }, 00:05:12.969 { 00:05:12.969 "method": "bdev_wait_for_examine" 00:05:12.969 } 00:05:12.969 ] 00:05:12.969 }, 00:05:12.969 { 00:05:12.969 "subsystem": "scsi", 00:05:12.969 "config": null 00:05:12.969 }, 00:05:12.969 { 00:05:12.969 "subsystem": "scheduler", 00:05:12.969 "config": [ 00:05:12.969 { 00:05:12.969 "method": "framework_set_scheduler", 00:05:12.969 "params": { 00:05:12.969 "name": "static" 00:05:12.969 } 00:05:12.969 } 00:05:12.969 ] 00:05:12.969 }, 00:05:12.969 { 00:05:12.969 "subsystem": "vhost_scsi", 00:05:12.969 "config": [] 00:05:12.969 }, 00:05:12.969 { 00:05:12.969 "subsystem": "vhost_blk", 00:05:12.969 "config": [] 00:05:12.969 }, 00:05:12.969 { 00:05:12.969 "subsystem": "ublk", 00:05:12.969 "config": [] 00:05:12.969 }, 00:05:12.969 { 00:05:12.969 "subsystem": "nbd", 00:05:12.969 "config": [] 00:05:12.969 }, 00:05:12.969 { 00:05:12.969 "subsystem": "nvmf", 00:05:12.969 "config": [ 00:05:12.969 { 00:05:12.969 "method": "nvmf_set_config", 00:05:12.969 "params": { 00:05:12.969 "discovery_filter": "match_any", 00:05:12.969 "admin_cmd_passthru": { 00:05:12.969 "identify_ctrlr": false 00:05:12.969 } 00:05:12.969 } 00:05:12.969 }, 00:05:12.969 { 00:05:12.969 "method": "nvmf_set_max_subsystems", 00:05:12.969 "params": { 00:05:12.969 "max_subsystems": 1024 00:05:12.969 } 00:05:12.969 }, 00:05:12.969 { 00:05:12.969 "method": "nvmf_set_crdt", 00:05:12.969 "params": { 00:05:12.969 "crdt1": 0, 00:05:12.969 "crdt2": 0, 00:05:12.969 "crdt3": 0 00:05:12.969 } 00:05:12.969 }, 00:05:12.969 { 00:05:12.969 "method": "nvmf_create_transport", 00:05:12.969 "params": { 00:05:12.969 "trtype": "TCP", 00:05:12.969 "max_queue_depth": 128, 00:05:12.969 "max_io_qpairs_per_ctrlr": 127, 00:05:12.969 "in_capsule_data_size": 4096, 00:05:12.969 "max_io_size": 131072, 00:05:12.969 "io_unit_size": 131072, 00:05:12.969 "max_aq_depth": 128, 00:05:12.969 "num_shared_buffers": 511, 00:05:12.969 "buf_cache_size": 4294967295, 00:05:12.969 "dif_insert_or_strip": false, 00:05:12.969 "zcopy": false, 00:05:12.969 "c2h_success": true, 00:05:12.969 "sock_priority": 0, 00:05:12.969 "abort_timeout_sec": 1, 00:05:12.969 "ack_timeout": 0, 00:05:12.969 "data_wr_pool_size": 0 00:05:12.969 } 00:05:12.969 } 00:05:12.969 ] 00:05:12.969 }, 00:05:12.969 { 00:05:12.969 "subsystem": "iscsi", 00:05:12.969 "config": [ 00:05:12.969 { 00:05:12.969 "method": "iscsi_set_options", 00:05:12.969 "params": { 00:05:12.969 "node_base": "iqn.2016-06.io.spdk", 00:05:12.969 "max_sessions": 128, 00:05:12.969 "max_connections_per_session": 2, 00:05:12.969 "max_queue_depth": 64, 00:05:12.969 "default_time2wait": 2, 00:05:12.969 "default_time2retain": 20, 00:05:12.969 "first_burst_length": 8192, 00:05:12.969 "immediate_data": true, 00:05:12.969 "allow_duplicated_isid": false, 00:05:12.969 "error_recovery_level": 0, 00:05:12.969 "nop_timeout": 60, 00:05:12.969 "nop_in_interval": 30, 00:05:12.969 "disable_chap": false, 00:05:12.969 "require_chap": false, 00:05:12.969 "mutual_chap": false, 
00:05:12.969 "chap_group": 0, 00:05:12.969 "max_large_datain_per_connection": 64, 00:05:12.969 "max_r2t_per_connection": 4, 00:05:12.969 "pdu_pool_size": 36864, 00:05:12.969 "immediate_data_pool_size": 16384, 00:05:12.969 "data_out_pool_size": 2048 00:05:12.969 } 00:05:12.969 } 00:05:12.969 ] 00:05:12.969 } 00:05:12.969 ] 00:05:12.969 } 00:05:12.969 15:00:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:12.969 15:00:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 62636 00:05:12.969 15:00:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 62636 ']' 00:05:12.969 15:00:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 62636 00:05:12.969 15:00:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:12.969 15:00:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:12.969 15:00:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62636 00:05:12.969 killing process with pid 62636 00:05:12.969 15:00:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:12.969 15:00:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:12.969 15:00:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62636' 00:05:12.969 15:00:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 62636 00:05:12.969 15:00:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 62636 00:05:16.262 15:00:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=62698 00:05:16.262 15:00:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:16.262 15:00:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:21.512 15:00:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 62698 00:05:21.512 15:00:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 62698 ']' 00:05:21.513 15:00:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 62698 00:05:21.513 15:00:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:21.513 15:00:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:21.513 15:00:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62698 00:05:21.513 killing process with pid 62698 00:05:21.513 15:00:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:21.513 15:00:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:21.513 15:00:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62698' 00:05:21.513 15:00:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 62698 00:05:21.513 15:00:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 62698 00:05:23.413 15:01:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:23.413 15:01:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 
00:05:23.413 ************************************ 00:05:23.413 END TEST skip_rpc_with_json 00:05:23.413 ************************************ 00:05:23.413 00:05:23.413 real 0m12.354s 00:05:23.413 user 0m11.606s 00:05:23.413 sys 0m0.964s 00:05:23.413 15:01:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.413 15:01:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:23.413 15:01:01 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:23.413 15:01:01 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:23.413 15:01:01 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.413 15:01:01 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.413 15:01:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.413 ************************************ 00:05:23.413 START TEST skip_rpc_with_delay 00:05:23.413 ************************************ 00:05:23.413 15:01:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:23.413 15:01:01 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:23.413 15:01:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:23.413 15:01:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:23.413 15:01:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:23.413 15:01:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.413 15:01:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:23.413 15:01:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.413 15:01:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:23.413 15:01:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.413 15:01:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:23.413 15:01:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:23.413 15:01:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:23.670 [2024-07-15 15:01:01.622131] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:23.670 [2024-07-15 15:01:01.622279] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:23.670 15:01:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:23.670 15:01:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:23.670 15:01:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:23.670 15:01:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:23.670 00:05:23.670 real 0m0.181s 00:05:23.670 user 0m0.093s 00:05:23.670 sys 0m0.086s 00:05:23.670 15:01:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.670 15:01:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:23.670 ************************************ 00:05:23.670 END TEST skip_rpc_with_delay 00:05:23.670 ************************************ 00:05:23.670 15:01:01 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:23.670 15:01:01 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:23.670 15:01:01 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:23.670 15:01:01 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:23.670 15:01:01 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.670 15:01:01 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.670 15:01:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.670 ************************************ 00:05:23.670 START TEST exit_on_failed_rpc_init 00:05:23.670 ************************************ 00:05:23.670 15:01:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:23.670 15:01:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=62837 00:05:23.670 15:01:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:23.671 15:01:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 62837 00:05:23.671 15:01:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 62837 ']' 00:05:23.671 15:01:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.671 15:01:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.671 15:01:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.671 15:01:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.671 15:01:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:23.927 [2024-07-15 15:01:01.866358] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
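skip_rpc_with_delay confirms that --wait-for-rpc is rejected when no RPC server will be started, which is exactly the error printed above: nothing could ever deliver the start-up RPC. In the normal flow the flag defers subsystem initialization until the client releases it; a sketch of both cases, with framework_start_init assumed as the release call (it does not appear in this log):

  # invalid combination: fails immediately with the error shown above
  build/bin/spdk_tgt --no-rpc-server --wait-for-rpc -m 0x1

  # typical use: hold initialization until the client says go
  build/bin/spdk_tgt --wait-for-rpc -m 0x1 &
  scripts/rpc.py framework_start_init    # completes subsystem init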
00:05:23.927 [2024-07-15 15:01:01.866507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62837 ] 00:05:23.927 [2024-07-15 15:01:02.036434] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.492 [2024-07-15 15:01:02.296266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.436 15:01:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.436 15:01:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:25.436 15:01:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:25.436 15:01:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:25.436 15:01:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:25.436 15:01:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:25.436 15:01:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:25.436 15:01:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:25.436 15:01:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:25.436 15:01:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:25.436 15:01:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:25.436 15:01:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:25.436 15:01:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:25.436 15:01:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:25.436 15:01:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:25.436 [2024-07-15 15:01:03.355871] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:05:25.436 [2024-07-15 15:01:03.355986] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62855 ] 00:05:25.436 [2024-07-15 15:01:03.520077] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.692 [2024-07-15 15:01:03.781137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.692 [2024-07-15 15:01:03.781241] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
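exit_on_failed_rpc_init launches a second target while the first still owns /var/tmp/spdk.sock, so rpc_listen fails with the "in use. Specify another." error above and the second instance exits non-zero. Running two targets side by side instead requires a distinct RPC socket per instance, for example via -r (assumed here, not shown in this log):

  # first target owns the default socket
  build/bin/spdk_tgt -m 0x1 &
  # second target must listen elsewhere or it exits with the error above
  build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &
  # point the client at the second instance explicitly
  scripts/rpc.py -s /var/tmp/spdk2.sock spdk_get_version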
00:05:25.692 [2024-07-15 15:01:03.781258] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:25.692 [2024-07-15 15:01:03.781271] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:26.258 15:01:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:26.258 15:01:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:26.258 15:01:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:26.258 15:01:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:26.258 15:01:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:26.258 15:01:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:26.258 15:01:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:26.258 15:01:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 62837 00:05:26.258 15:01:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 62837 ']' 00:05:26.258 15:01:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 62837 00:05:26.258 15:01:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:26.258 15:01:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:26.258 15:01:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62837 00:05:26.258 15:01:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:26.258 15:01:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:26.258 15:01:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62837' 00:05:26.258 killing process with pid 62837 00:05:26.258 15:01:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 62837 00:05:26.258 15:01:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 62837 00:05:29.579 00:05:29.579 real 0m5.217s 00:05:29.579 user 0m5.882s 00:05:29.579 sys 0m0.576s 00:05:29.579 15:01:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.579 15:01:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:29.579 ************************************ 00:05:29.579 END TEST exit_on_failed_rpc_init 00:05:29.579 ************************************ 00:05:29.579 15:01:07 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:29.579 15:01:07 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:29.579 00:05:29.579 real 0m26.244s 00:05:29.579 user 0m25.166s 00:05:29.579 sys 0m2.451s 00:05:29.579 15:01:07 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.579 15:01:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.579 ************************************ 00:05:29.579 END TEST skip_rpc 00:05:29.579 ************************************ 00:05:29.579 15:01:07 -- common/autotest_common.sh@1142 -- # return 0 00:05:29.579 15:01:07 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:29.579 15:01:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.579 
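Each suite tears the target down through the same killprocess helper seen here for pid 62837: verify the pid still belongs to a live reactor process, kill it, and wait for it to exit. Stripped of the harness plumbing, the pattern amounts to something like:

  # hypothetical condensed teardown, mirroring the kill -0 / ps / kill / wait sequence above
  teardown_target() {
    local pid=$1
    kill -0 "$pid" || return 1                                 # still running?
    [ "$(ps --no-headers -o comm= "$pid")" = reactor_0 ] || return 1
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"
  }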
15:01:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.579 15:01:07 -- common/autotest_common.sh@10 -- # set +x 00:05:29.580 ************************************ 00:05:29.580 START TEST rpc_client 00:05:29.580 ************************************ 00:05:29.580 15:01:07 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:29.580 * Looking for test storage... 00:05:29.580 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:29.580 15:01:07 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:29.580 OK 00:05:29.580 15:01:07 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:29.580 00:05:29.580 real 0m0.185s 00:05:29.580 user 0m0.078s 00:05:29.580 sys 0m0.116s 00:05:29.580 15:01:07 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.580 15:01:07 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:29.580 ************************************ 00:05:29.580 END TEST rpc_client 00:05:29.580 ************************************ 00:05:29.580 15:01:07 -- common/autotest_common.sh@1142 -- # return 0 00:05:29.580 15:01:07 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:29.580 15:01:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.580 15:01:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.580 15:01:07 -- common/autotest_common.sh@10 -- # set +x 00:05:29.580 ************************************ 00:05:29.580 START TEST json_config 00:05:29.580 ************************************ 00:05:29.580 15:01:07 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:29.580 15:01:07 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:29.580 15:01:07 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:29.580 15:01:07 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:29.580 15:01:07 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:29.580 15:01:07 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:29.580 15:01:07 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:29.580 15:01:07 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:29.580 15:01:07 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:29.580 15:01:07 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:29.580 15:01:07 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:29.580 15:01:07 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:29.580 15:01:07 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:29.580 15:01:07 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7fb24378-06dc-4546-ad9f-378969c62fd9 00:05:29.580 15:01:07 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=7fb24378-06dc-4546-ad9f-378969c62fd9 00:05:29.580 15:01:07 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:29.580 15:01:07 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:29.580 15:01:07 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:29.580 15:01:07 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:29.580 15:01:07 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:29.580 15:01:07 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:29.580 15:01:07 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:29.580 15:01:07 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:29.580 15:01:07 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.580 15:01:07 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.580 15:01:07 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.580 15:01:07 json_config -- paths/export.sh@5 -- # export PATH 00:05:29.580 15:01:07 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.580 15:01:07 json_config -- nvmf/common.sh@47 -- # : 0 00:05:29.580 15:01:07 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:29.580 15:01:07 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:29.580 15:01:07 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:29.580 15:01:07 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:29.580 15:01:07 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:29.580 15:01:07 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:29.580 15:01:07 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:29.580 15:01:07 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:29.580 WARNING: No tests are enabled so not running JSON configuration tests 00:05:29.580 15:01:07 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:29.580 15:01:07 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:29.580 15:01:07 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:29.580 15:01:07 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:29.580 15:01:07 json_config -- json_config/json_config.sh@26 -- # 
(( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:29.580 15:01:07 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:29.580 15:01:07 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:29.580 00:05:29.580 real 0m0.110s 00:05:29.580 user 0m0.060s 00:05:29.580 sys 0m0.049s 00:05:29.580 ************************************ 00:05:29.580 END TEST json_config 00:05:29.580 ************************************ 00:05:29.580 15:01:07 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.580 15:01:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.580 15:01:07 -- common/autotest_common.sh@1142 -- # return 0 00:05:29.580 15:01:07 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:29.580 15:01:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.580 15:01:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.580 15:01:07 -- common/autotest_common.sh@10 -- # set +x 00:05:29.580 ************************************ 00:05:29.580 START TEST json_config_extra_key 00:05:29.580 ************************************ 00:05:29.580 15:01:07 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:29.580 15:01:07 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:29.580 15:01:07 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:29.580 15:01:07 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:29.580 15:01:07 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:29.580 15:01:07 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:29.580 15:01:07 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:29.580 15:01:07 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:29.580 15:01:07 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:29.580 15:01:07 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:29.580 15:01:07 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:29.580 15:01:07 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:29.580 15:01:07 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:29.580 15:01:07 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7fb24378-06dc-4546-ad9f-378969c62fd9 00:05:29.580 15:01:07 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=7fb24378-06dc-4546-ad9f-378969c62fd9 00:05:29.580 15:01:07 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:29.580 15:01:07 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:29.580 15:01:07 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:29.580 15:01:07 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:29.580 15:01:07 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:29.580 15:01:07 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:05:29.580 15:01:07 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:29.580 15:01:07 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:29.580 15:01:07 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.581 15:01:07 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.581 15:01:07 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.581 15:01:07 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:29.581 15:01:07 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.581 15:01:07 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:29.581 15:01:07 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:29.581 15:01:07 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:29.581 15:01:07 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:29.581 15:01:07 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:29.581 15:01:07 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:29.581 15:01:07 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:29.581 15:01:07 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:29.581 15:01:07 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:29.581 15:01:07 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:29.581 15:01:07 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:29.581 15:01:07 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:29.581 15:01:07 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 
00:05:29.581 15:01:07 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:29.581 15:01:07 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:29.581 15:01:07 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:29.581 15:01:07 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:29.581 15:01:07 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:29.581 15:01:07 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:29.581 INFO: launching applications... 00:05:29.581 15:01:07 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:29.581 15:01:07 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:29.581 15:01:07 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:29.581 15:01:07 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:29.581 15:01:07 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:29.581 15:01:07 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:29.581 15:01:07 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:29.581 15:01:07 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:29.581 15:01:07 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:29.581 15:01:07 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=63041 00:05:29.581 Waiting for target to run... 00:05:29.581 15:01:07 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:29.581 15:01:07 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 63041 /var/tmp/spdk_tgt.sock 00:05:29.581 15:01:07 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 63041 ']' 00:05:29.581 15:01:07 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:29.581 15:01:07 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:29.581 15:01:07 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:29.581 15:01:07 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:29.581 15:01:07 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.581 15:01:07 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:29.842 [2024-07-15 15:01:07.735576] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
00:05:29.842 [2024-07-15 15:01:07.735796] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63041 ] 00:05:30.102 [2024-07-15 15:01:08.120584] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.361 [2024-07-15 15:01:08.340330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.300 15:01:09 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:31.300 15:01:09 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:31.300 15:01:09 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:31.300 00:05:31.300 15:01:09 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:31.300 INFO: shutting down applications... 00:05:31.300 15:01:09 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:31.300 15:01:09 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:31.300 15:01:09 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:31.300 15:01:09 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 63041 ]] 00:05:31.300 15:01:09 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 63041 00:05:31.300 15:01:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:31.300 15:01:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:31.300 15:01:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63041 00:05:31.300 15:01:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:31.559 15:01:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:31.559 15:01:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:31.559 15:01:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63041 00:05:31.559 15:01:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:32.158 15:01:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:32.158 15:01:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:32.158 15:01:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63041 00:05:32.158 15:01:10 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:32.741 15:01:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:32.741 15:01:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:32.741 15:01:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63041 00:05:32.741 15:01:10 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:33.310 15:01:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:33.310 15:01:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:33.310 15:01:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63041 00:05:33.310 15:01:11 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:33.880 15:01:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:33.880 15:01:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:33.880 15:01:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63041 
00:05:33.880 15:01:11 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:34.138 15:01:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:34.138 15:01:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:34.138 15:01:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63041 00:05:34.138 SPDK target shutdown done 00:05:34.138 Success 00:05:34.138 15:01:12 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:34.138 15:01:12 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:34.138 15:01:12 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:34.138 15:01:12 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:34.138 15:01:12 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:34.138 00:05:34.138 real 0m4.695s 00:05:34.138 user 0m4.422s 00:05:34.138 sys 0m0.556s 00:05:34.138 15:01:12 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.138 15:01:12 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:34.138 ************************************ 00:05:34.138 END TEST json_config_extra_key 00:05:34.138 ************************************ 00:05:34.138 15:01:12 -- common/autotest_common.sh@1142 -- # return 0 00:05:34.139 15:01:12 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:34.139 15:01:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.139 15:01:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.139 15:01:12 -- common/autotest_common.sh@10 -- # set +x 00:05:34.401 ************************************ 00:05:34.401 START TEST alias_rpc 00:05:34.401 ************************************ 00:05:34.401 15:01:12 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:34.401 * Looking for test storage... 00:05:34.401 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:34.401 15:01:12 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:34.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.401 15:01:12 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=63150 00:05:34.401 15:01:12 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:34.401 15:01:12 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 63150 00:05:34.401 15:01:12 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 63150 ']' 00:05:34.401 15:01:12 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.401 15:01:12 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.401 15:01:12 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.401 15:01:12 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.401 15:01:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.401 [2024-07-15 15:01:12.477314] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
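The json_config_extra_key run that finishes above walks the target through its full lifecycle: spdk_tgt is launched with -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json extra_key.json, the harness waits for the RPC socket to come up, and shutdown is a SIGINT followed by polling the pid up to 30 times at 0.5 s intervals. A minimal standalone sketch of that pattern, reconstructed from the xtrace rather than taken from the json_config/common.sh helpers (the rpc.py polling used for the wait step is an assumption; the flags, paths, and the 30 x 0.5 s loop come straight from the log):

#!/usr/bin/env bash
# Sketch of the start/wait/SIGINT-poll pattern visible in the xtrace above.
# Assumes an SPDK build tree at $SPDK_DIR; this is not the real common.sh helper code.
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
SOCK=/var/tmp/spdk_tgt.sock
CONFIG=$SPDK_DIR/test/json_config/extra_key.json

# Launch the target with a JSON config, as in the log.
"$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$SOCK" --json "$CONFIG" &
pid=$!

# Wait until the RPC socket answers (assumed stand-in for waitforlisten).
for (( i = 0; i < 100; i++ )); do
    if "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" rpc_get_methods &>/dev/null; then
        break
    fi
    sleep 0.1
done

# Graceful shutdown: SIGINT, then poll the pid up to 30 times at 0.5 s,
# exactly the kill -0 / sleep 0.5 loop the log shows.
kill -SIGINT "$pid"
for (( i = 0; i < 30; i++ )); do
    kill -0 "$pid" 2>/dev/null || break
    sleep 0.5
done
wait "$pid" 2>/dev/null || true
echo 'SPDK target shutdown done'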
00:05:34.401 [2024-07-15 15:01:12.477445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63150 ] 00:05:34.669 [2024-07-15 15:01:12.643691] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.929 [2024-07-15 15:01:12.881586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.865 15:01:13 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:35.865 15:01:13 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:35.865 15:01:13 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:36.123 15:01:14 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 63150 00:05:36.123 15:01:14 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 63150 ']' 00:05:36.123 15:01:14 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 63150 00:05:36.123 15:01:14 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:36.123 15:01:14 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:36.123 15:01:14 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63150 00:05:36.123 killing process with pid 63150 00:05:36.123 15:01:14 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:36.123 15:01:14 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:36.123 15:01:14 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63150' 00:05:36.123 15:01:14 alias_rpc -- common/autotest_common.sh@967 -- # kill 63150 00:05:36.123 15:01:14 alias_rpc -- common/autotest_common.sh@972 -- # wait 63150 00:05:38.680 ************************************ 00:05:38.680 END TEST alias_rpc 00:05:38.680 ************************************ 00:05:38.680 00:05:38.680 real 0m4.528s 00:05:38.680 user 0m4.535s 00:05:38.680 sys 0m0.517s 00:05:38.680 15:01:16 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.680 15:01:16 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.939 15:01:16 -- common/autotest_common.sh@1142 -- # return 0 00:05:38.939 15:01:16 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:38.939 15:01:16 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:38.939 15:01:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.939 15:01:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.939 15:01:16 -- common/autotest_common.sh@10 -- # set +x 00:05:38.939 ************************************ 00:05:38.939 START TEST spdkcli_tcp 00:05:38.939 ************************************ 00:05:38.939 15:01:16 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:38.939 * Looking for test storage... 
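Both targets torn down above (pids 62837 and 63150) go through the same killprocess sequence: confirm the pid is set and still alive, check the process name with ps and refuse to kill anything named sudo, then kill and wait. A condensed sketch of that sequence as reconstructed from the xtrace; it stands in for, and is not copied from, the autotest_common.sh helper:

# Condensed form of the killprocess steps traced above (kill -0 liveness check,
# comm-name sanity check, kill, wait). Reconstruction from the log only.
killprocess_sketch() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 0          # already gone, nothing to do
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        # Never kill a privileged wrapper by mistake, as the trace checks:
        [ "$process_name" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true
}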
00:05:38.939 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:38.940 15:01:16 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:38.940 15:01:16 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:38.940 15:01:16 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:38.940 15:01:16 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:38.940 15:01:16 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:38.940 15:01:16 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:38.940 15:01:16 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:38.940 15:01:16 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:38.940 15:01:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:38.940 15:01:16 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=63249 00:05:38.940 15:01:16 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:38.940 15:01:16 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 63249 00:05:38.940 15:01:16 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 63249 ']' 00:05:38.940 15:01:16 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.940 15:01:16 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:38.940 15:01:16 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.940 15:01:16 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:38.940 15:01:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:39.199 [2024-07-15 15:01:17.075202] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
00:05:39.199 [2024-07-15 15:01:17.075405] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63249 ] 00:05:39.199 [2024-07-15 15:01:17.242442] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:39.457 [2024-07-15 15:01:17.519700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.457 [2024-07-15 15:01:17.519734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.393 15:01:18 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:40.393 15:01:18 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:40.393 15:01:18 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=63277 00:05:40.393 15:01:18 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:40.393 15:01:18 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:40.651 [ 00:05:40.651 "bdev_malloc_delete", 00:05:40.651 "bdev_malloc_create", 00:05:40.651 "bdev_null_resize", 00:05:40.651 "bdev_null_delete", 00:05:40.651 "bdev_null_create", 00:05:40.651 "bdev_nvme_cuse_unregister", 00:05:40.651 "bdev_nvme_cuse_register", 00:05:40.651 "bdev_opal_new_user", 00:05:40.651 "bdev_opal_set_lock_state", 00:05:40.651 "bdev_opal_delete", 00:05:40.651 "bdev_opal_get_info", 00:05:40.651 "bdev_opal_create", 00:05:40.651 "bdev_nvme_opal_revert", 00:05:40.651 "bdev_nvme_opal_init", 00:05:40.651 "bdev_nvme_send_cmd", 00:05:40.651 "bdev_nvme_get_path_iostat", 00:05:40.651 "bdev_nvme_get_mdns_discovery_info", 00:05:40.651 "bdev_nvme_stop_mdns_discovery", 00:05:40.651 "bdev_nvme_start_mdns_discovery", 00:05:40.651 "bdev_nvme_set_multipath_policy", 00:05:40.651 "bdev_nvme_set_preferred_path", 00:05:40.651 "bdev_nvme_get_io_paths", 00:05:40.651 "bdev_nvme_remove_error_injection", 00:05:40.651 "bdev_nvme_add_error_injection", 00:05:40.651 "bdev_nvme_get_discovery_info", 00:05:40.651 "bdev_nvme_stop_discovery", 00:05:40.651 "bdev_nvme_start_discovery", 00:05:40.651 "bdev_nvme_get_controller_health_info", 00:05:40.651 "bdev_nvme_disable_controller", 00:05:40.651 "bdev_nvme_enable_controller", 00:05:40.651 "bdev_nvme_reset_controller", 00:05:40.651 "bdev_nvme_get_transport_statistics", 00:05:40.651 "bdev_nvme_apply_firmware", 00:05:40.651 "bdev_nvme_detach_controller", 00:05:40.651 "bdev_nvme_get_controllers", 00:05:40.651 "bdev_nvme_attach_controller", 00:05:40.651 "bdev_nvme_set_hotplug", 00:05:40.651 "bdev_nvme_set_options", 00:05:40.651 "bdev_passthru_delete", 00:05:40.651 "bdev_passthru_create", 00:05:40.651 "bdev_lvol_set_parent_bdev", 00:05:40.651 "bdev_lvol_set_parent", 00:05:40.651 "bdev_lvol_check_shallow_copy", 00:05:40.651 "bdev_lvol_start_shallow_copy", 00:05:40.651 "bdev_lvol_grow_lvstore", 00:05:40.651 "bdev_lvol_get_lvols", 00:05:40.651 "bdev_lvol_get_lvstores", 00:05:40.651 "bdev_lvol_delete", 00:05:40.651 "bdev_lvol_set_read_only", 00:05:40.651 "bdev_lvol_resize", 00:05:40.651 "bdev_lvol_decouple_parent", 00:05:40.651 "bdev_lvol_inflate", 00:05:40.651 "bdev_lvol_rename", 00:05:40.651 "bdev_lvol_clone_bdev", 00:05:40.651 "bdev_lvol_clone", 00:05:40.651 "bdev_lvol_snapshot", 00:05:40.651 "bdev_lvol_create", 00:05:40.652 "bdev_lvol_delete_lvstore", 00:05:40.652 "bdev_lvol_rename_lvstore", 00:05:40.652 "bdev_lvol_create_lvstore", 
00:05:40.652 "bdev_raid_set_options", 00:05:40.652 "bdev_raid_remove_base_bdev", 00:05:40.652 "bdev_raid_add_base_bdev", 00:05:40.652 "bdev_raid_delete", 00:05:40.652 "bdev_raid_create", 00:05:40.652 "bdev_raid_get_bdevs", 00:05:40.652 "bdev_error_inject_error", 00:05:40.652 "bdev_error_delete", 00:05:40.652 "bdev_error_create", 00:05:40.652 "bdev_split_delete", 00:05:40.652 "bdev_split_create", 00:05:40.652 "bdev_delay_delete", 00:05:40.652 "bdev_delay_create", 00:05:40.652 "bdev_delay_update_latency", 00:05:40.652 "bdev_zone_block_delete", 00:05:40.652 "bdev_zone_block_create", 00:05:40.652 "blobfs_create", 00:05:40.652 "blobfs_detect", 00:05:40.652 "blobfs_set_cache_size", 00:05:40.652 "bdev_xnvme_delete", 00:05:40.652 "bdev_xnvme_create", 00:05:40.652 "bdev_aio_delete", 00:05:40.652 "bdev_aio_rescan", 00:05:40.652 "bdev_aio_create", 00:05:40.652 "bdev_ftl_set_property", 00:05:40.652 "bdev_ftl_get_properties", 00:05:40.652 "bdev_ftl_get_stats", 00:05:40.652 "bdev_ftl_unmap", 00:05:40.652 "bdev_ftl_unload", 00:05:40.652 "bdev_ftl_delete", 00:05:40.652 "bdev_ftl_load", 00:05:40.652 "bdev_ftl_create", 00:05:40.652 "bdev_virtio_attach_controller", 00:05:40.652 "bdev_virtio_scsi_get_devices", 00:05:40.652 "bdev_virtio_detach_controller", 00:05:40.652 "bdev_virtio_blk_set_hotplug", 00:05:40.652 "bdev_iscsi_delete", 00:05:40.652 "bdev_iscsi_create", 00:05:40.652 "bdev_iscsi_set_options", 00:05:40.652 "accel_error_inject_error", 00:05:40.652 "ioat_scan_accel_module", 00:05:40.652 "dsa_scan_accel_module", 00:05:40.652 "iaa_scan_accel_module", 00:05:40.652 "keyring_file_remove_key", 00:05:40.652 "keyring_file_add_key", 00:05:40.652 "keyring_linux_set_options", 00:05:40.652 "iscsi_get_histogram", 00:05:40.652 "iscsi_enable_histogram", 00:05:40.652 "iscsi_set_options", 00:05:40.652 "iscsi_get_auth_groups", 00:05:40.652 "iscsi_auth_group_remove_secret", 00:05:40.652 "iscsi_auth_group_add_secret", 00:05:40.652 "iscsi_delete_auth_group", 00:05:40.652 "iscsi_create_auth_group", 00:05:40.652 "iscsi_set_discovery_auth", 00:05:40.652 "iscsi_get_options", 00:05:40.652 "iscsi_target_node_request_logout", 00:05:40.652 "iscsi_target_node_set_redirect", 00:05:40.652 "iscsi_target_node_set_auth", 00:05:40.652 "iscsi_target_node_add_lun", 00:05:40.652 "iscsi_get_stats", 00:05:40.652 "iscsi_get_connections", 00:05:40.652 "iscsi_portal_group_set_auth", 00:05:40.652 "iscsi_start_portal_group", 00:05:40.652 "iscsi_delete_portal_group", 00:05:40.652 "iscsi_create_portal_group", 00:05:40.652 "iscsi_get_portal_groups", 00:05:40.652 "iscsi_delete_target_node", 00:05:40.652 "iscsi_target_node_remove_pg_ig_maps", 00:05:40.652 "iscsi_target_node_add_pg_ig_maps", 00:05:40.652 "iscsi_create_target_node", 00:05:40.652 "iscsi_get_target_nodes", 00:05:40.652 "iscsi_delete_initiator_group", 00:05:40.652 "iscsi_initiator_group_remove_initiators", 00:05:40.652 "iscsi_initiator_group_add_initiators", 00:05:40.652 "iscsi_create_initiator_group", 00:05:40.652 "iscsi_get_initiator_groups", 00:05:40.652 "nvmf_set_crdt", 00:05:40.652 "nvmf_set_config", 00:05:40.652 "nvmf_set_max_subsystems", 00:05:40.652 "nvmf_stop_mdns_prr", 00:05:40.652 "nvmf_publish_mdns_prr", 00:05:40.652 "nvmf_subsystem_get_listeners", 00:05:40.652 "nvmf_subsystem_get_qpairs", 00:05:40.652 "nvmf_subsystem_get_controllers", 00:05:40.652 "nvmf_get_stats", 00:05:40.652 "nvmf_get_transports", 00:05:40.652 "nvmf_create_transport", 00:05:40.652 "nvmf_get_targets", 00:05:40.652 "nvmf_delete_target", 00:05:40.652 "nvmf_create_target", 00:05:40.652 
"nvmf_subsystem_allow_any_host", 00:05:40.652 "nvmf_subsystem_remove_host", 00:05:40.652 "nvmf_subsystem_add_host", 00:05:40.652 "nvmf_ns_remove_host", 00:05:40.652 "nvmf_ns_add_host", 00:05:40.652 "nvmf_subsystem_remove_ns", 00:05:40.652 "nvmf_subsystem_add_ns", 00:05:40.652 "nvmf_subsystem_listener_set_ana_state", 00:05:40.652 "nvmf_discovery_get_referrals", 00:05:40.652 "nvmf_discovery_remove_referral", 00:05:40.652 "nvmf_discovery_add_referral", 00:05:40.652 "nvmf_subsystem_remove_listener", 00:05:40.652 "nvmf_subsystem_add_listener", 00:05:40.652 "nvmf_delete_subsystem", 00:05:40.652 "nvmf_create_subsystem", 00:05:40.652 "nvmf_get_subsystems", 00:05:40.652 "env_dpdk_get_mem_stats", 00:05:40.652 "nbd_get_disks", 00:05:40.652 "nbd_stop_disk", 00:05:40.652 "nbd_start_disk", 00:05:40.652 "ublk_recover_disk", 00:05:40.652 "ublk_get_disks", 00:05:40.652 "ublk_stop_disk", 00:05:40.652 "ublk_start_disk", 00:05:40.652 "ublk_destroy_target", 00:05:40.652 "ublk_create_target", 00:05:40.652 "virtio_blk_create_transport", 00:05:40.652 "virtio_blk_get_transports", 00:05:40.652 "vhost_controller_set_coalescing", 00:05:40.652 "vhost_get_controllers", 00:05:40.652 "vhost_delete_controller", 00:05:40.652 "vhost_create_blk_controller", 00:05:40.652 "vhost_scsi_controller_remove_target", 00:05:40.652 "vhost_scsi_controller_add_target", 00:05:40.652 "vhost_start_scsi_controller", 00:05:40.652 "vhost_create_scsi_controller", 00:05:40.652 "thread_set_cpumask", 00:05:40.652 "framework_get_governor", 00:05:40.652 "framework_get_scheduler", 00:05:40.652 "framework_set_scheduler", 00:05:40.652 "framework_get_reactors", 00:05:40.652 "thread_get_io_channels", 00:05:40.652 "thread_get_pollers", 00:05:40.652 "thread_get_stats", 00:05:40.652 "framework_monitor_context_switch", 00:05:40.652 "spdk_kill_instance", 00:05:40.652 "log_enable_timestamps", 00:05:40.652 "log_get_flags", 00:05:40.652 "log_clear_flag", 00:05:40.652 "log_set_flag", 00:05:40.652 "log_get_level", 00:05:40.652 "log_set_level", 00:05:40.652 "log_get_print_level", 00:05:40.652 "log_set_print_level", 00:05:40.652 "framework_enable_cpumask_locks", 00:05:40.652 "framework_disable_cpumask_locks", 00:05:40.652 "framework_wait_init", 00:05:40.652 "framework_start_init", 00:05:40.652 "scsi_get_devices", 00:05:40.652 "bdev_get_histogram", 00:05:40.652 "bdev_enable_histogram", 00:05:40.652 "bdev_set_qos_limit", 00:05:40.652 "bdev_set_qd_sampling_period", 00:05:40.652 "bdev_get_bdevs", 00:05:40.652 "bdev_reset_iostat", 00:05:40.652 "bdev_get_iostat", 00:05:40.652 "bdev_examine", 00:05:40.652 "bdev_wait_for_examine", 00:05:40.652 "bdev_set_options", 00:05:40.652 "notify_get_notifications", 00:05:40.652 "notify_get_types", 00:05:40.652 "accel_get_stats", 00:05:40.652 "accel_set_options", 00:05:40.652 "accel_set_driver", 00:05:40.652 "accel_crypto_key_destroy", 00:05:40.652 "accel_crypto_keys_get", 00:05:40.652 "accel_crypto_key_create", 00:05:40.652 "accel_assign_opc", 00:05:40.652 "accel_get_module_info", 00:05:40.652 "accel_get_opc_assignments", 00:05:40.652 "vmd_rescan", 00:05:40.652 "vmd_remove_device", 00:05:40.652 "vmd_enable", 00:05:40.652 "sock_get_default_impl", 00:05:40.652 "sock_set_default_impl", 00:05:40.652 "sock_impl_set_options", 00:05:40.652 "sock_impl_get_options", 00:05:40.652 "iobuf_get_stats", 00:05:40.652 "iobuf_set_options", 00:05:40.652 "framework_get_pci_devices", 00:05:40.652 "framework_get_config", 00:05:40.652 "framework_get_subsystems", 00:05:40.652 "trace_get_info", 00:05:40.652 "trace_get_tpoint_group_mask", 00:05:40.652 
"trace_disable_tpoint_group", 00:05:40.652 "trace_enable_tpoint_group", 00:05:40.652 "trace_clear_tpoint_mask", 00:05:40.652 "trace_set_tpoint_mask", 00:05:40.652 "keyring_get_keys", 00:05:40.652 "spdk_get_version", 00:05:40.652 "rpc_get_methods" 00:05:40.652 ] 00:05:40.652 15:01:18 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:40.652 15:01:18 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:40.652 15:01:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:40.652 15:01:18 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:40.652 15:01:18 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 63249 00:05:40.652 15:01:18 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 63249 ']' 00:05:40.652 15:01:18 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 63249 00:05:40.652 15:01:18 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:40.652 15:01:18 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:40.652 15:01:18 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63249 00:05:40.911 killing process with pid 63249 00:05:40.911 15:01:18 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:40.911 15:01:18 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:40.911 15:01:18 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63249' 00:05:40.911 15:01:18 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 63249 00:05:40.911 15:01:18 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 63249 00:05:43.470 ************************************ 00:05:43.470 END TEST spdkcli_tcp 00:05:43.470 ************************************ 00:05:43.470 00:05:43.470 real 0m4.612s 00:05:43.470 user 0m8.012s 00:05:43.470 sys 0m0.602s 00:05:43.470 15:01:21 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.470 15:01:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:43.470 15:01:21 -- common/autotest_common.sh@1142 -- # return 0 00:05:43.470 15:01:21 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:43.470 15:01:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.470 15:01:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.470 15:01:21 -- common/autotest_common.sh@10 -- # set +x 00:05:43.470 ************************************ 00:05:43.470 START TEST dpdk_mem_utility 00:05:43.470 ************************************ 00:05:43.470 15:01:21 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:43.730 * Looking for test storage... 
00:05:43.730 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:43.730 15:01:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:43.730 15:01:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=63372 00:05:43.730 15:01:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:43.730 15:01:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 63372 00:05:43.730 15:01:21 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 63372 ']' 00:05:43.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.730 15:01:21 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.730 15:01:21 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.730 15:01:21 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.730 15:01:21 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.730 15:01:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:43.730 [2024-07-15 15:01:21.725733] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:05:43.730 [2024-07-15 15:01:21.725863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63372 ] 00:05:43.989 [2024-07-15 15:01:21.891413] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.248 [2024-07-15 15:01:22.143087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.186 15:01:23 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.186 15:01:23 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:45.186 15:01:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:45.186 15:01:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:45.186 15:01:23 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.186 15:01:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:45.186 { 00:05:45.186 "filename": "/tmp/spdk_mem_dump.txt" 00:05:45.186 } 00:05:45.186 15:01:23 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.186 15:01:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:45.186 DPDK memory size 820.000000 MiB in 1 heap(s) 00:05:45.186 1 heaps totaling size 820.000000 MiB 00:05:45.186 size: 820.000000 MiB heap id: 0 00:05:45.186 end heaps---------- 00:05:45.186 8 mempools totaling size 598.116089 MiB 00:05:45.186 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:45.186 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:45.186 size: 84.521057 MiB name: bdev_io_63372 00:05:45.186 size: 51.011292 MiB name: evtpool_63372 00:05:45.186 size: 50.003479 MiB name: msgpool_63372 00:05:45.186 size: 21.763794 MiB name: PDU_Pool 00:05:45.186 size: 19.513306 MiB name: SCSI_TASK_Pool 
00:05:45.186 size: 0.026123 MiB name: Session_Pool 00:05:45.186 end mempools------- 00:05:45.186 6 memzones totaling size 4.142822 MiB 00:05:45.186 size: 1.000366 MiB name: RG_ring_0_63372 00:05:45.186 size: 1.000366 MiB name: RG_ring_1_63372 00:05:45.186 size: 1.000366 MiB name: RG_ring_4_63372 00:05:45.186 size: 1.000366 MiB name: RG_ring_5_63372 00:05:45.186 size: 0.125366 MiB name: RG_ring_2_63372 00:05:45.186 size: 0.015991 MiB name: RG_ring_3_63372 00:05:45.186 end memzones------- 00:05:45.186 15:01:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:45.186 heap id: 0 total size: 820.000000 MiB number of busy elements: 297 number of free elements: 18 00:05:45.186 list of free elements. size: 18.452271 MiB 00:05:45.186 element at address: 0x200000400000 with size: 1.999451 MiB 00:05:45.186 element at address: 0x200000800000 with size: 1.996887 MiB 00:05:45.186 element at address: 0x200007000000 with size: 1.995972 MiB 00:05:45.186 element at address: 0x20000b200000 with size: 1.995972 MiB 00:05:45.186 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:45.186 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:45.186 element at address: 0x200019600000 with size: 0.999084 MiB 00:05:45.186 element at address: 0x200003e00000 with size: 0.996094 MiB 00:05:45.186 element at address: 0x200032200000 with size: 0.994324 MiB 00:05:45.186 element at address: 0x200018e00000 with size: 0.959656 MiB 00:05:45.186 element at address: 0x200019900040 with size: 0.936401 MiB 00:05:45.186 element at address: 0x200000200000 with size: 0.830200 MiB 00:05:45.186 element at address: 0x20001b000000 with size: 0.564880 MiB 00:05:45.186 element at address: 0x200019200000 with size: 0.487976 MiB 00:05:45.186 element at address: 0x200019a00000 with size: 0.485413 MiB 00:05:45.186 element at address: 0x200013800000 with size: 0.467651 MiB 00:05:45.186 element at address: 0x200028400000 with size: 0.390442 MiB 00:05:45.186 element at address: 0x200003a00000 with size: 0.351990 MiB 00:05:45.186 list of standard malloc elements. 
size: 199.283325 MiB 00:05:45.186 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:05:45.186 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:05:45.186 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:45.186 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:45.186 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:45.186 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:45.186 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:05:45.186 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:45.186 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:05:45.186 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:05:45.186 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:05:45.186 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d6f00 with size: 0.000244 MiB 
00:05:45.186 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:45.186 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:45.186 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:05:45.186 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:05:45.186 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:05:45.186 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:05:45.186 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:05:45.186 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:05:45.186 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:05:45.186 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:05:45.186 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:05:45.186 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:05:45.186 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:05:45.186 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:05:45.186 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:05:45.186 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:05:45.186 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:05:45.186 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:05:45.186 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:05:45.186 element at address: 0x200003aff980 with size: 0.000244 MiB 00:05:45.186 element at address: 0x200003affa80 with size: 0.000244 MiB 00:05:45.186 element at address: 0x200003eff000 with size: 0.000244 MiB 00:05:45.186 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:05:45.186 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:05:45.186 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:05:45.186 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:05:45.186 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:05:45.186 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:05:45.186 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:05:45.187 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:05:45.187 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:05:45.187 element at 
address: 0x2000137ff380 with size: 0.000244 MiB 00:05:45.187 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:05:45.187 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:05:45.187 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:05:45.187 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:05:45.187 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:05:45.187 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:05:45.187 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:05:45.187 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:05:45.187 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:05:45.187 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:05:45.187 element at address: 0x200013877b80 with size: 0.000244 MiB 00:05:45.187 element at address: 0x200013877c80 with size: 0.000244 MiB 00:05:45.187 element at address: 0x200013877d80 with size: 0.000244 MiB 00:05:45.187 element at address: 0x200013877e80 with size: 0.000244 MiB 00:05:45.187 element at address: 0x200013877f80 with size: 0.000244 MiB 00:05:45.187 element at address: 0x200013878080 with size: 0.000244 MiB 00:05:45.187 element at address: 0x200013878180 with size: 0.000244 MiB 00:05:45.187 element at address: 0x200013878280 with size: 0.000244 MiB 00:05:45.187 element at address: 0x200013878380 with size: 0.000244 MiB 00:05:45.187 element at address: 0x200013878480 with size: 0.000244 MiB 00:05:45.187 element at address: 0x200013878580 with size: 0.000244 MiB 00:05:45.187 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:45.187 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:05:45.187 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x200019abc680 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b0911c0 
with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b0942c0 with size: 0.000244 MiB 
00:05:45.187 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:05:45.187 element at address: 0x200028463f40 with size: 0.000244 MiB 00:05:45.187 element at address: 0x200028464040 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20002846af80 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20002846b080 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20002846b180 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20002846b280 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20002846b380 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20002846b480 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20002846b580 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20002846b680 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20002846b780 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20002846b880 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20002846b980 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20002846be80 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20002846c080 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20002846c180 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20002846c280 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20002846c380 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20002846c480 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20002846c580 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20002846c680 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20002846c780 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20002846c880 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20002846c980 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:05:45.187 element at 
address: 0x20002846cc80 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:05:45.187 element at address: 0x20002846d080 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846d180 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846d280 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846d380 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846d480 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846d580 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846d680 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846d780 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846d880 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846d980 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846da80 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846db80 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846de80 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846df80 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846e080 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846e180 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846e280 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846e380 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846e480 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846e580 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846e680 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846e780 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846e880 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846e980 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846f080 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846f180 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846f280 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846f380 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846f480 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846f580 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846f680 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846f780 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846f880 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846f980 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846fd80 
with size: 0.000244 MiB 00:05:45.188 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:05:45.188 list of memzone associated elements. size: 602.264404 MiB 00:05:45.188 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:05:45.188 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:45.188 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:05:45.188 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:45.188 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:05:45.188 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_63372_0 00:05:45.188 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:05:45.188 associated memzone info: size: 48.002930 MiB name: MP_evtpool_63372_0 00:05:45.188 element at address: 0x200003fff340 with size: 48.003113 MiB 00:05:45.188 associated memzone info: size: 48.002930 MiB name: MP_msgpool_63372_0 00:05:45.188 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:05:45.188 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:45.188 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:05:45.188 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:45.188 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:05:45.188 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_63372 00:05:45.188 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:05:45.188 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_63372 00:05:45.188 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:45.188 associated memzone info: size: 1.007996 MiB name: MP_evtpool_63372 00:05:45.188 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:45.188 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:45.188 element at address: 0x200019abc780 with size: 1.008179 MiB 00:05:45.188 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:45.188 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:45.188 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:45.188 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:05:45.188 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:45.188 element at address: 0x200003eff100 with size: 1.000549 MiB 00:05:45.188 associated memzone info: size: 1.000366 MiB name: RG_ring_0_63372 00:05:45.188 element at address: 0x200003affb80 with size: 1.000549 MiB 00:05:45.188 associated memzone info: size: 1.000366 MiB name: RG_ring_1_63372 00:05:45.188 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:05:45.188 associated memzone info: size: 1.000366 MiB name: RG_ring_4_63372 00:05:45.188 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:05:45.188 associated memzone info: size: 1.000366 MiB name: RG_ring_5_63372 00:05:45.188 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:05:45.188 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_63372 00:05:45.188 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:05:45.188 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:45.188 element at address: 0x200013878680 with size: 0.500549 MiB 00:05:45.188 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:45.188 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:05:45.188 associated memzone info: size: 0.250366 MiB name: 
RG_MP_PDU_immediate_data_Pool 00:05:45.188 element at address: 0x200003adf740 with size: 0.125549 MiB 00:05:45.188 associated memzone info: size: 0.125366 MiB name: RG_ring_2_63372 00:05:45.188 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:05:45.188 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:45.188 element at address: 0x200028464140 with size: 0.023804 MiB 00:05:45.188 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:45.188 element at address: 0x200003adb500 with size: 0.016174 MiB 00:05:45.188 associated memzone info: size: 0.015991 MiB name: RG_ring_3_63372 00:05:45.188 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:05:45.188 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:45.188 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:05:45.188 associated memzone info: size: 0.000183 MiB name: MP_msgpool_63372 00:05:45.188 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:05:45.188 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_63372 00:05:45.188 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:05:45.188 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:45.448 15:01:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:45.448 15:01:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 63372 00:05:45.448 15:01:23 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 63372 ']' 00:05:45.448 15:01:23 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 63372 00:05:45.448 15:01:23 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:45.448 15:01:23 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:45.448 15:01:23 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63372 00:05:45.448 killing process with pid 63372 00:05:45.448 15:01:23 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:45.448 15:01:23 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:45.448 15:01:23 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63372' 00:05:45.448 15:01:23 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 63372 00:05:45.448 15:01:23 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 63372 00:05:47.987 ************************************ 00:05:47.987 END TEST dpdk_mem_utility 00:05:47.987 ************************************ 00:05:47.987 00:05:47.987 real 0m4.424s 00:05:47.987 user 0m4.363s 00:05:47.987 sys 0m0.534s 00:05:47.987 15:01:25 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.987 15:01:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:47.988 15:01:25 -- common/autotest_common.sh@1142 -- # return 0 00:05:47.988 15:01:25 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:47.988 15:01:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:47.988 15:01:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.988 15:01:25 -- common/autotest_common.sh@10 -- # set +x 00:05:47.988 ************************************ 00:05:47.988 START TEST event 00:05:47.988 ************************************ 00:05:47.988 15:01:25 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 
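The per-lcore event counts reported next come from the event_perf invocation traced just above. As a rough standalone sketch (assuming a built SPDK tree at /home/vagrant/spdk_repo/spdk and hugepages already configured for DPDK), the same benchmark can be run by hand:

  # Drive the SPDK event framework on four cores (-m 0xF) for one second (-t 1);
  # the tool prints one event count per lcore, as in the output below.
  cd /home/vagrant/spdk_repo/spdk
  sudo ./test/event/event_perf/event_perf -m 0xF -t 1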
00:05:48.247 * Looking for test storage... 00:05:48.247 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:48.247 15:01:26 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:48.247 15:01:26 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:48.247 15:01:26 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:48.247 15:01:26 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:48.247 15:01:26 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.247 15:01:26 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.247 ************************************ 00:05:48.247 START TEST event_perf 00:05:48.247 ************************************ 00:05:48.247 15:01:26 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:48.247 Running I/O for 1 seconds...[2024-07-15 15:01:26.178851] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:05:48.247 [2024-07-15 15:01:26.179048] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63476 ] 00:05:48.247 [2024-07-15 15:01:26.347384] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:48.505 [2024-07-15 15:01:26.611693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.505 [2024-07-15 15:01:26.611861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:48.505 [2024-07-15 15:01:26.611946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.505 Running I/O for 1 seconds...[2024-07-15 15:01:26.611981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:49.959 00:05:49.959 lcore 0: 184769 00:05:49.959 lcore 1: 184768 00:05:49.959 lcore 2: 184767 00:05:49.959 lcore 3: 184767 00:05:49.959 done. 00:05:50.217 00:05:50.217 real 0m1.947s 00:05:50.217 user 0m4.680s 00:05:50.217 sys 0m0.140s 00:05:50.217 15:01:28 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.217 15:01:28 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:50.217 ************************************ 00:05:50.217 END TEST event_perf 00:05:50.217 ************************************ 00:05:50.217 15:01:28 event -- common/autotest_common.sh@1142 -- # return 0 00:05:50.217 15:01:28 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:50.217 15:01:28 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:50.217 15:01:28 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.217 15:01:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.217 ************************************ 00:05:50.217 START TEST event_reactor 00:05:50.217 ************************************ 00:05:50.217 15:01:28 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:50.217 [2024-07-15 15:01:28.186458] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
00:05:50.217 [2024-07-15 15:01:28.186561] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63521 ] 00:05:50.476 [2024-07-15 15:01:28.351643] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.735 [2024-07-15 15:01:28.591738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.109 test_start 00:05:52.109 oneshot 00:05:52.109 tick 100 00:05:52.109 tick 100 00:05:52.109 tick 250 00:05:52.109 tick 100 00:05:52.109 tick 100 00:05:52.109 tick 100 00:05:52.109 tick 250 00:05:52.109 tick 500 00:05:52.109 tick 100 00:05:52.109 tick 100 00:05:52.109 tick 250 00:05:52.109 tick 100 00:05:52.109 tick 100 00:05:52.109 test_end 00:05:52.109 00:05:52.109 real 0m1.889s 00:05:52.109 user 0m1.676s 00:05:52.109 sys 0m0.103s 00:05:52.109 15:01:30 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.109 15:01:30 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:52.109 ************************************ 00:05:52.109 END TEST event_reactor 00:05:52.109 ************************************ 00:05:52.109 15:01:30 event -- common/autotest_common.sh@1142 -- # return 0 00:05:52.109 15:01:30 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:52.109 15:01:30 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:52.109 15:01:30 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.109 15:01:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:52.109 ************************************ 00:05:52.109 START TEST event_reactor_perf 00:05:52.109 ************************************ 00:05:52.109 15:01:30 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:52.109 [2024-07-15 15:01:30.141204] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
00:05:52.109 [2024-07-15 15:01:30.141338] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63558 ] 00:05:52.367 [2024-07-15 15:01:30.306840] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.626 [2024-07-15 15:01:30.547849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.056 test_start 00:05:54.056 test_end 00:05:54.056 Performance: 345762 events per second 00:05:54.056 ************************************ 00:05:54.056 END TEST event_reactor_perf 00:05:54.056 00:05:54.056 real 0m1.909s 00:05:54.056 user 0m1.679s 00:05:54.056 sys 0m0.119s 00:05:54.056 15:01:32 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.056 15:01:32 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:54.056 ************************************ 00:05:54.056 15:01:32 event -- common/autotest_common.sh@1142 -- # return 0 00:05:54.056 15:01:32 event -- event/event.sh@49 -- # uname -s 00:05:54.056 15:01:32 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:54.056 15:01:32 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:54.056 15:01:32 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:54.056 15:01:32 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.056 15:01:32 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.056 ************************************ 00:05:54.056 START TEST event_scheduler 00:05:54.056 ************************************ 00:05:54.056 15:01:32 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:54.314 * Looking for test storage... 00:05:54.314 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:54.314 15:01:32 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:54.314 15:01:32 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=63626 00:05:54.314 15:01:32 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:54.314 15:01:32 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.314 15:01:32 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 63626 00:05:54.314 15:01:32 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 63626 ']' 00:05:54.314 15:01:32 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.314 15:01:32 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.314 15:01:32 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.314 15:01:32 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.314 15:01:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:54.315 [2024-07-15 15:01:32.266896] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
00:05:54.315 [2024-07-15 15:01:32.267096] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63626 ] 00:05:54.574 [2024-07-15 15:01:32.435494] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:54.834 [2024-07-15 15:01:32.684266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.834 [2024-07-15 15:01:32.684587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.834 [2024-07-15 15:01:32.684645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:54.834 [2024-07-15 15:01:32.684447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.094 15:01:33 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.094 15:01:33 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:55.094 15:01:33 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:55.094 15:01:33 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.094 15:01:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:55.094 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:55.094 POWER: Cannot set governor of lcore 0 to userspace 00:05:55.094 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:55.094 POWER: Cannot set governor of lcore 0 to performance 00:05:55.094 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:55.094 POWER: Cannot set governor of lcore 0 to userspace 00:05:55.094 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:55.094 POWER: Cannot set governor of lcore 0 to userspace 00:05:55.094 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:55.094 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:55.094 POWER: Unable to set Power Management Environment for lcore 0 00:05:55.094 [2024-07-15 15:01:33.142818] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:55.094 [2024-07-15 15:01:33.142869] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:55.094 [2024-07-15 15:01:33.142959] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:55.094 [2024-07-15 15:01:33.143089] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:55.094 [2024-07-15 15:01:33.143224] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:55.094 [2024-07-15 15:01:33.143307] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:55.094 15:01:33 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.094 15:01:33 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:55.094 15:01:33 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.094 15:01:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:55.662 [2024-07-15 15:01:33.532922] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
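With the scheduler test application started, the trace above has already switched it to the dynamic scheduler and completed framework initialization over RPC; the POWER/governor errors appear to be the app falling back when the cpufreq sysfs files are unavailable in the VM. A minimal sketch of that RPC sequence, assuming the app is listening on the default /var/tmp/spdk.sock and that rpc_cmd is the RPC helper used throughout common/autotest_common.sh:

  # Select the dynamic scheduler, then let framework initialization finish.
  rpc_cmd framework_set_scheduler dynamic
  rpc_cmd framework_start_init
  # scheduler.sh@12 then creates a pinned, fully-active thread on core 0.
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100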
00:05:55.662 15:01:33 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.662 15:01:33 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:55.662 15:01:33 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.662 15:01:33 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.662 15:01:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:55.662 ************************************ 00:05:55.662 START TEST scheduler_create_thread 00:05:55.662 ************************************ 00:05:55.662 15:01:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:55.662 15:01:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:55.662 15:01:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.662 15:01:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.662 2 00:05:55.662 15:01:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.662 15:01:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:55.662 15:01:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.662 15:01:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.662 3 00:05:55.662 15:01:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.662 15:01:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:55.662 15:01:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.662 15:01:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.662 4 00:05:55.662 15:01:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.662 15:01:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:55.662 15:01:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.662 15:01:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.662 5 00:05:55.662 15:01:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.662 15:01:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:55.662 15:01:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.662 15:01:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.662 6 00:05:55.662 15:01:33 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.662 15:01:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:55.662 15:01:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.662 15:01:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.662 7 00:05:55.662 15:01:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.662 15:01:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:55.662 15:01:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.662 15:01:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.662 8 00:05:55.662 15:01:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.662 15:01:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:55.662 15:01:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.662 15:01:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.662 9 00:05:55.662 15:01:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.662 15:01:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:55.662 15:01:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.662 15:01:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.662 10 00:05:55.662 15:01:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.662 15:01:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:55.662 15:01:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.662 15:01:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.040 15:01:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.040 15:01:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:57.040 15:01:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:57.040 15:01:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.040 15:01:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.979 15:01:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.979 15:01:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:57.979 15:01:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.979 15:01:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.918 15:01:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.918 15:01:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:58.918 15:01:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:58.918 15:01:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.918 15:01:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.487 ************************************ 00:05:59.487 END TEST scheduler_create_thread 00:05:59.487 ************************************ 00:05:59.487 15:01:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:59.487 00:05:59.487 real 0m3.885s 00:05:59.487 user 0m0.031s 00:05:59.487 sys 0m0.004s 00:05:59.487 15:01:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.487 15:01:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.487 15:01:37 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:59.487 15:01:37 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:59.487 15:01:37 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 63626 00:05:59.487 15:01:37 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 63626 ']' 00:05:59.487 15:01:37 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 63626 00:05:59.487 15:01:37 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:59.487 15:01:37 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:59.487 15:01:37 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63626 00:05:59.487 killing process with pid 63626 00:05:59.487 15:01:37 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:59.487 15:01:37 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:59.487 15:01:37 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63626' 00:05:59.487 15:01:37 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 63626 00:05:59.487 15:01:37 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 63626 00:05:59.749 [2024-07-15 15:01:37.811700] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
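The scheduler test tears itself down through the killprocess helper traced above (trap, kill -0, ps, kill, wait). A rough reconstruction of that flow from the trace, assuming the target app was launched in the background by the same shell (as scheduler.sh does); the pid is purely illustrative:

  # Sketch of the killprocess sequence visible above.
  pid=63626
  kill -0 "$pid"                        # verify the process is still alive
  [ "$(uname)" = Linux ] && \
    ps --no-headers -o comm= "$pid"     # check it is an SPDK reactor, not sudo
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"                           # reap the backgrounded app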
00:06:01.175 00:06:01.175 real 0m7.194s 00:06:01.175 user 0m14.826s 00:06:01.175 sys 0m0.492s 00:06:01.175 15:01:39 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.175 ************************************ 00:06:01.175 END TEST event_scheduler 00:06:01.175 ************************************ 00:06:01.175 15:01:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:01.437 15:01:39 event -- common/autotest_common.sh@1142 -- # return 0 00:06:01.437 15:01:39 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:01.437 15:01:39 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:01.437 15:01:39 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.437 15:01:39 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.437 15:01:39 event -- common/autotest_common.sh@10 -- # set +x 00:06:01.437 ************************************ 00:06:01.437 START TEST app_repeat 00:06:01.437 ************************************ 00:06:01.437 15:01:39 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:01.437 15:01:39 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.437 15:01:39 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.438 15:01:39 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:01.438 15:01:39 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.438 15:01:39 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:01.438 15:01:39 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:01.438 15:01:39 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:01.438 15:01:39 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:01.438 15:01:39 event.app_repeat -- event/event.sh@19 -- # repeat_pid=63756 00:06:01.438 15:01:39 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:01.438 15:01:39 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 63756' 00:06:01.438 Process app_repeat pid: 63756 00:06:01.438 15:01:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:01.438 15:01:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:01.438 spdk_app_start Round 0 00:06:01.438 15:01:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63756 /var/tmp/spdk-nbd.sock 00:06:01.438 15:01:39 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63756 ']' 00:06:01.438 15:01:39 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:01.438 15:01:39 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.438 15:01:39 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:01.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:01.438 15:01:39 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.438 15:01:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:01.438 [2024-07-15 15:01:39.398147] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
00:06:01.438 [2024-07-15 15:01:39.398701] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63756 ] 00:06:01.699 [2024-07-15 15:01:39.585044] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:01.958 [2024-07-15 15:01:39.866005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.958 [2024-07-15 15:01:39.866073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.217 15:01:40 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.217 15:01:40 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:02.217 15:01:40 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:02.787 Malloc0 00:06:02.788 15:01:40 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:03.047 Malloc1 00:06:03.047 15:01:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.047 15:01:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.047 15:01:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.047 15:01:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:03.047 15:01:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.047 15:01:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:03.047 15:01:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.047 15:01:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.047 15:01:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.047 15:01:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:03.047 15:01:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.047 15:01:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:03.047 15:01:40 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:03.047 15:01:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:03.047 15:01:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.047 15:01:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:03.306 /dev/nbd0 00:06:03.306 15:01:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:03.306 15:01:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:03.306 15:01:41 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:03.306 15:01:41 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:03.306 15:01:41 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:03.306 15:01:41 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:03.306 15:01:41 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:03.306 15:01:41 event.app_repeat -- 
common/autotest_common.sh@871 -- # break 00:06:03.306 15:01:41 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:03.306 15:01:41 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:03.306 15:01:41 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:03.306 1+0 records in 00:06:03.306 1+0 records out 00:06:03.306 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286127 s, 14.3 MB/s 00:06:03.306 15:01:41 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:03.306 15:01:41 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:03.306 15:01:41 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:03.306 15:01:41 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:03.306 15:01:41 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:03.306 15:01:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.306 15:01:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.306 15:01:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:03.306 /dev/nbd1 00:06:03.566 15:01:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:03.566 15:01:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:03.566 15:01:41 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:03.566 15:01:41 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:03.566 15:01:41 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:03.566 15:01:41 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:03.566 15:01:41 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:03.566 15:01:41 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:03.566 15:01:41 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:03.566 15:01:41 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:03.566 15:01:41 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:03.566 1+0 records in 00:06:03.566 1+0 records out 00:06:03.566 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000655648 s, 6.2 MB/s 00:06:03.566 15:01:41 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:03.566 15:01:41 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:03.566 15:01:41 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:03.566 15:01:41 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:03.566 15:01:41 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:03.566 15:01:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.566 15:01:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.566 15:01:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.566 15:01:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.566 
15:01:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.566 15:01:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:03.566 { 00:06:03.566 "nbd_device": "/dev/nbd0", 00:06:03.566 "bdev_name": "Malloc0" 00:06:03.566 }, 00:06:03.566 { 00:06:03.566 "nbd_device": "/dev/nbd1", 00:06:03.566 "bdev_name": "Malloc1" 00:06:03.566 } 00:06:03.566 ]' 00:06:03.566 15:01:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:03.566 15:01:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:03.566 { 00:06:03.566 "nbd_device": "/dev/nbd0", 00:06:03.566 "bdev_name": "Malloc0" 00:06:03.566 }, 00:06:03.566 { 00:06:03.566 "nbd_device": "/dev/nbd1", 00:06:03.566 "bdev_name": "Malloc1" 00:06:03.566 } 00:06:03.566 ]' 00:06:03.824 15:01:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:03.824 /dev/nbd1' 00:06:03.824 15:01:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:03.824 /dev/nbd1' 00:06:03.824 15:01:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:03.824 15:01:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:03.824 15:01:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:03.824 15:01:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:03.824 15:01:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:03.824 15:01:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:03.824 15:01:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.824 15:01:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.824 15:01:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:03.824 15:01:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:03.824 15:01:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:03.824 15:01:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:03.824 256+0 records in 00:06:03.824 256+0 records out 00:06:03.824 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0133256 s, 78.7 MB/s 00:06:03.824 15:01:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.824 15:01:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:03.824 256+0 records in 00:06:03.824 256+0 records out 00:06:03.824 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0265628 s, 39.5 MB/s 00:06:03.824 15:01:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.824 15:01:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:03.824 256+0 records in 00:06:03.824 256+0 records out 00:06:03.824 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253738 s, 41.3 MB/s 00:06:03.824 15:01:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:03.824 15:01:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.824 15:01:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.824 15:01:41 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:03.824 15:01:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:03.824 15:01:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:03.824 15:01:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:03.824 15:01:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.824 15:01:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:03.824 15:01:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.824 15:01:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:03.824 15:01:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:03.824 15:01:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:03.824 15:01:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.824 15:01:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.824 15:01:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:03.824 15:01:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:03.824 15:01:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.824 15:01:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:04.083 15:01:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:04.083 15:01:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:04.083 15:01:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:04.083 15:01:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.083 15:01:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.083 15:01:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:04.083 15:01:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:04.083 15:01:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.083 15:01:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.083 15:01:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:04.341 15:01:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:04.341 15:01:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:04.341 15:01:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:04.341 15:01:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.341 15:01:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.341 15:01:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:04.341 15:01:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:04.341 15:01:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.341 15:01:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.341 15:01:42 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.341 15:01:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.600 15:01:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:04.600 15:01:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:04.600 15:01:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.600 15:01:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:04.600 15:01:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:04.600 15:01:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.600 15:01:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:04.600 15:01:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:04.600 15:01:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:04.600 15:01:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:04.600 15:01:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:04.600 15:01:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:04.600 15:01:42 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:05.168 15:01:43 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:06.544 [2024-07-15 15:01:44.533911] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:06.803 [2024-07-15 15:01:44.785637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.803 [2024-07-15 15:01:44.785637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.061 [2024-07-15 15:01:45.033651] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:07.061 [2024-07-15 15:01:45.033714] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:07.997 spdk_app_start Round 1 00:06:07.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:07.997 15:01:46 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:07.997 15:01:46 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:07.997 15:01:46 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63756 /var/tmp/spdk-nbd.sock 00:06:07.997 15:01:46 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63756 ']' 00:06:07.997 15:01:46 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:07.997 15:01:46 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:07.997 15:01:46 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
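spdk_app_start Round 1 below repeats the malloc-bdev/NBD read-back check from Round 0. A condensed sketch of that loop, using the same socket, RPCs, and sizes that appear in the trace (RPC is shorthand introduced here only for readability):

  RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
  $RPC bdev_malloc_create 64 4096            # 64 MiB bdev with 4 KiB blocks -> Malloc0
  $RPC nbd_start_disk Malloc0 /dev/nbd0      # expose the bdev as /dev/nbd0
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
  dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M nbdrandtest /dev/nbd0         # verify the 1 MiB just written
  $RPC nbd_stop_disk /dev/nbd0
  rm -f nbdrandtest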
00:06:07.997 15:01:46 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:07.997 15:01:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:08.256 15:01:46 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:08.256 15:01:46 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:08.256 15:01:46 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.514 Malloc0 00:06:08.514 15:01:46 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.772 Malloc1 00:06:08.772 15:01:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.772 15:01:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.772 15:01:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.772 15:01:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:08.772 15:01:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.772 15:01:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:08.772 15:01:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.772 15:01:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.772 15:01:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.772 15:01:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:08.772 15:01:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.772 15:01:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:08.772 15:01:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:08.772 15:01:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:08.772 15:01:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.772 15:01:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:09.031 /dev/nbd0 00:06:09.031 15:01:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:09.031 15:01:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:09.031 15:01:47 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:09.031 15:01:47 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:09.031 15:01:47 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:09.031 15:01:47 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:09.031 15:01:47 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:09.031 15:01:47 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:09.031 15:01:47 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:09.031 15:01:47 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:09.031 15:01:47 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:09.031 1+0 records in 00:06:09.031 1+0 records out 
00:06:09.031 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000559896 s, 7.3 MB/s 00:06:09.031 15:01:47 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:09.031 15:01:47 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:09.031 15:01:47 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:09.031 15:01:47 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:09.031 15:01:47 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:09.031 15:01:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.031 15:01:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.031 15:01:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:09.290 /dev/nbd1 00:06:09.290 15:01:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:09.290 15:01:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:09.290 15:01:47 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:09.290 15:01:47 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:09.290 15:01:47 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:09.290 15:01:47 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:09.290 15:01:47 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:09.290 15:01:47 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:09.290 15:01:47 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:09.290 15:01:47 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:09.290 15:01:47 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:09.290 1+0 records in 00:06:09.290 1+0 records out 00:06:09.290 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000660794 s, 6.2 MB/s 00:06:09.290 15:01:47 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:09.290 15:01:47 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:09.290 15:01:47 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:09.290 15:01:47 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:09.290 15:01:47 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:09.290 15:01:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.290 15:01:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.290 15:01:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.290 15:01:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.290 15:01:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.550 15:01:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:09.550 { 00:06:09.550 "nbd_device": "/dev/nbd0", 00:06:09.550 "bdev_name": "Malloc0" 00:06:09.550 }, 00:06:09.550 { 00:06:09.550 "nbd_device": "/dev/nbd1", 00:06:09.550 "bdev_name": "Malloc1" 00:06:09.550 } 
00:06:09.550 ]' 00:06:09.550 15:01:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.550 15:01:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:09.550 { 00:06:09.550 "nbd_device": "/dev/nbd0", 00:06:09.550 "bdev_name": "Malloc0" 00:06:09.550 }, 00:06:09.550 { 00:06:09.550 "nbd_device": "/dev/nbd1", 00:06:09.550 "bdev_name": "Malloc1" 00:06:09.550 } 00:06:09.550 ]' 00:06:09.550 15:01:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:09.550 /dev/nbd1' 00:06:09.550 15:01:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:09.550 /dev/nbd1' 00:06:09.550 15:01:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.550 15:01:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:09.550 15:01:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:09.550 15:01:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:09.550 15:01:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:09.550 15:01:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:09.550 15:01:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.550 15:01:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.550 15:01:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:09.550 15:01:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:09.550 15:01:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:09.550 15:01:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:09.550 256+0 records in 00:06:09.550 256+0 records out 00:06:09.550 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126438 s, 82.9 MB/s 00:06:09.550 15:01:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.550 15:01:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:09.550 256+0 records in 00:06:09.550 256+0 records out 00:06:09.550 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233903 s, 44.8 MB/s 00:06:09.550 15:01:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.550 15:01:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:09.550 256+0 records in 00:06:09.550 256+0 records out 00:06:09.550 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.027528 s, 38.1 MB/s 00:06:09.550 15:01:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:09.550 15:01:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.550 15:01:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.550 15:01:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:09.550 15:01:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:09.550 15:01:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:09.550 15:01:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:09.550 15:01:47 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:09.550 15:01:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:09.550 15:01:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.550 15:01:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:09.550 15:01:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:09.550 15:01:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:09.550 15:01:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.550 15:01:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.550 15:01:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:09.550 15:01:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:09.550 15:01:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.550 15:01:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:09.811 15:01:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:09.811 15:01:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:09.811 15:01:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:09.811 15:01:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.811 15:01:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.811 15:01:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:09.811 15:01:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:09.811 15:01:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.811 15:01:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.811 15:01:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:10.071 15:01:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:10.071 15:01:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:10.071 15:01:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:10.071 15:01:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.071 15:01:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.071 15:01:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:10.071 15:01:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:10.071 15:01:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.071 15:01:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.071 15:01:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.071 15:01:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.351 15:01:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:10.351 15:01:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:10.351 15:01:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:10.351 15:01:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:10.351 15:01:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.351 15:01:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:10.351 15:01:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:10.351 15:01:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:10.351 15:01:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:10.351 15:01:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:10.351 15:01:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:10.351 15:01:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:10.351 15:01:48 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:10.918 15:01:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:12.294 [2024-07-15 15:01:50.349642] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:12.553 [2024-07-15 15:01:50.615231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.553 [2024-07-15 15:01:50.615250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.811 [2024-07-15 15:01:50.884233] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:12.811 [2024-07-15 15:01:50.884309] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:13.788 spdk_app_start Round 2 00:06:13.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:13.788 15:01:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:13.788 15:01:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:13.788 15:01:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63756 /var/tmp/spdk-nbd.sock 00:06:13.788 15:01:51 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63756 ']' 00:06:13.788 15:01:51 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:13.788 15:01:51 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.788 15:01:51 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
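
Round 1 above ran the same write-then-verify cycle through nbd_common.sh. A minimal sketch of nbd_dd_data_verify as suggested by the @70-@85 trace follows; the tmp_file path is the one resolved in this run, and the real helper may structure the write/verify branches slightly differently.

    nbd_dd_data_verify() {
        local nbd_list=($1)    # intentionally unquoted: $1 is a space-separated device list
        local operation=$2
        local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest   # as resolved in this run

        if [ "$operation" = write ]; then
            # push 1 MiB of random data through every NBD device with O_DIRECT
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
            for i in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
            done
        fi
        if [ "$operation" = verify ]; then
            # read the same 1 MiB back from each device and compare byte for byte
            for i in "${nbd_list[@]}"; do
                cmp -b -n 1M "$tmp_file" "$i"
            done
            rm "$tmp_file"
        fi
    }
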
00:06:13.788 15:01:51 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.788 15:01:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:14.047 15:01:51 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.047 15:01:51 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:14.047 15:01:51 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:14.305 Malloc0 00:06:14.305 15:01:52 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:14.564 Malloc1 00:06:14.564 15:01:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:14.564 15:01:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.564 15:01:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.564 15:01:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:14.564 15:01:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.564 15:01:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:14.564 15:01:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:14.564 15:01:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.564 15:01:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.564 15:01:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:14.564 15:01:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.564 15:01:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:14.564 15:01:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:14.564 15:01:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:14.564 15:01:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.564 15:01:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:14.823 /dev/nbd0 00:06:14.823 15:01:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:14.823 15:01:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:14.823 15:01:52 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:14.823 15:01:52 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:14.823 15:01:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:14.823 15:01:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:14.823 15:01:52 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:14.823 15:01:52 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:14.823 15:01:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:14.823 15:01:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:14.823 15:01:52 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:14.823 1+0 records in 00:06:14.823 1+0 records out 
00:06:14.823 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261835 s, 15.6 MB/s 00:06:14.823 15:01:52 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:14.823 15:01:52 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:14.823 15:01:52 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:14.823 15:01:52 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:14.823 15:01:52 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:14.823 15:01:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.823 15:01:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.823 15:01:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:15.081 /dev/nbd1 00:06:15.081 15:01:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:15.081 15:01:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:15.081 15:01:53 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:15.081 15:01:53 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:15.081 15:01:53 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:15.081 15:01:53 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:15.081 15:01:53 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:15.081 15:01:53 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:15.081 15:01:53 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:15.081 15:01:53 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:15.081 15:01:53 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:15.081 1+0 records in 00:06:15.081 1+0 records out 00:06:15.081 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000505485 s, 8.1 MB/s 00:06:15.081 15:01:53 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.081 15:01:53 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:15.081 15:01:53 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.081 15:01:53 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:15.081 15:01:53 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:15.081 15:01:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.343 15:01:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.343 15:01:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:15.343 15:01:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.343 15:01:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.343 15:01:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:15.343 { 00:06:15.343 "nbd_device": "/dev/nbd0", 00:06:15.343 "bdev_name": "Malloc0" 00:06:15.343 }, 00:06:15.343 { 00:06:15.343 "nbd_device": "/dev/nbd1", 00:06:15.343 "bdev_name": "Malloc1" 00:06:15.343 } 
00:06:15.343 ]' 00:06:15.343 15:01:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:15.343 15:01:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:15.343 { 00:06:15.343 "nbd_device": "/dev/nbd0", 00:06:15.343 "bdev_name": "Malloc0" 00:06:15.343 }, 00:06:15.343 { 00:06:15.343 "nbd_device": "/dev/nbd1", 00:06:15.343 "bdev_name": "Malloc1" 00:06:15.343 } 00:06:15.343 ]' 00:06:15.602 15:01:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:15.602 /dev/nbd1' 00:06:15.602 15:01:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:15.602 /dev/nbd1' 00:06:15.602 15:01:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:15.602 15:01:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:15.602 15:01:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:15.602 15:01:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:15.602 15:01:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:15.602 15:01:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:15.602 15:01:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.602 15:01:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:15.602 15:01:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:15.602 15:01:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:15.602 15:01:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:15.602 15:01:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:15.602 256+0 records in 00:06:15.602 256+0 records out 00:06:15.602 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00530498 s, 198 MB/s 00:06:15.602 15:01:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:15.602 15:01:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:15.602 256+0 records in 00:06:15.602 256+0 records out 00:06:15.602 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023181 s, 45.2 MB/s 00:06:15.602 15:01:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:15.602 15:01:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:15.602 256+0 records in 00:06:15.602 256+0 records out 00:06:15.602 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236465 s, 44.3 MB/s 00:06:15.602 15:01:53 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:15.602 15:01:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.602 15:01:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:15.602 15:01:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:15.602 15:01:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:15.602 15:01:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:15.602 15:01:53 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:15.602 15:01:53 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:15.602 15:01:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:15.602 15:01:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.602 15:01:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:15.602 15:01:53 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:15.602 15:01:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:15.602 15:01:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.602 15:01:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.602 15:01:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:15.602 15:01:53 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:15.602 15:01:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.602 15:01:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:15.860 15:01:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:15.860 15:01:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:15.860 15:01:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:15.860 15:01:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.860 15:01:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.860 15:01:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:15.860 15:01:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:15.860 15:01:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.860 15:01:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.860 15:01:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:16.120 15:01:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:16.120 15:01:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:16.120 15:01:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:16.120 15:01:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.120 15:01:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.120 15:01:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:16.120 15:01:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:16.120 15:01:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.120 15:01:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.120 15:01:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.120 15:01:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.120 15:01:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:16.120 15:01:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:16.120 15:01:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:16.379 15:01:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:16.379 15:01:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:16.379 15:01:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.379 15:01:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:16.379 15:01:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:16.379 15:01:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:16.379 15:01:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:16.379 15:01:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:16.379 15:01:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:16.379 15:01:54 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:16.671 15:01:54 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:18.055 [2024-07-15 15:01:56.127842] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:18.315 [2024-07-15 15:01:56.366960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.315 [2024-07-15 15:01:56.366962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.575 [2024-07-15 15:01:56.605450] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:18.575 [2024-07-15 15:01:56.605570] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:19.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:19.958 15:01:57 event.app_repeat -- event/event.sh@38 -- # waitforlisten 63756 /var/tmp/spdk-nbd.sock 00:06:19.958 15:01:57 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63756 ']' 00:06:19.958 15:01:57 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:19.958 15:01:57 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.958 15:01:57 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
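
Each round tears the NBD devices down through nbd_stop_disks, which then polls /proc/partitions until the kernel releases the device. The sketch below is reconstructed from the @49-@55 and @35-@45 traces; only the grep and the 20-attempt bound are visible above, so the 0.1 s retry interval is an assumption.

    nbd_stop_disks() {
        local rpc_server=$1
        local nbd_list=($2)    # unquoted on purpose: space-separated device list
        local i

        for i in "${nbd_list[@]}"; do
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" nbd_stop_disk "$i"
            waitfornbd_exit "$(basename "$i")"
        done
    }

    waitfornbd_exit() {
        local nbd_name=$1

        for ((i = 1; i <= 20; i++)); do
            # done as soon as the kernel drops the device from /proc/partitions
            if grep -q -w "$nbd_name" /proc/partitions; then
                sleep 0.1      # assumption: the interval is not visible in the trace
            else
                break
            fi
        done

        return 0
    }
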
00:06:19.958 15:01:57 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.958 15:01:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:19.958 15:01:57 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.958 15:01:57 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:19.958 15:01:57 event.app_repeat -- event/event.sh@39 -- # killprocess 63756 00:06:19.958 15:01:57 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 63756 ']' 00:06:19.958 15:01:57 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 63756 00:06:19.958 15:01:57 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:19.958 15:01:57 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:19.958 15:01:57 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63756 00:06:19.958 15:01:57 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:19.958 15:01:57 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:19.958 15:01:57 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63756' 00:06:19.958 killing process with pid 63756 00:06:19.958 15:01:57 event.app_repeat -- common/autotest_common.sh@967 -- # kill 63756 00:06:19.958 15:01:57 event.app_repeat -- common/autotest_common.sh@972 -- # wait 63756 00:06:21.337 spdk_app_start is called in Round 0. 00:06:21.337 Shutdown signal received, stop current app iteration 00:06:21.337 Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 reinitialization... 00:06:21.337 spdk_app_start is called in Round 1. 00:06:21.337 Shutdown signal received, stop current app iteration 00:06:21.337 Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 reinitialization... 00:06:21.337 spdk_app_start is called in Round 2. 00:06:21.337 Shutdown signal received, stop current app iteration 00:06:21.337 Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 reinitialization... 00:06:21.337 spdk_app_start is called in Round 3. 
00:06:21.337 Shutdown signal received, stop current app iteration 00:06:21.337 15:01:59 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:21.337 15:01:59 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:21.337 00:06:21.337 real 0m19.901s 00:06:21.337 user 0m40.970s 00:06:21.337 sys 0m2.863s 00:06:21.337 15:01:59 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.337 15:01:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:21.337 ************************************ 00:06:21.337 END TEST app_repeat 00:06:21.337 ************************************ 00:06:21.337 15:01:59 event -- common/autotest_common.sh@1142 -- # return 0 00:06:21.337 15:01:59 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:21.337 15:01:59 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:21.337 15:01:59 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:21.337 15:01:59 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.337 15:01:59 event -- common/autotest_common.sh@10 -- # set +x 00:06:21.337 ************************************ 00:06:21.337 START TEST cpu_locks 00:06:21.337 ************************************ 00:06:21.337 15:01:59 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:21.337 * Looking for test storage... 00:06:21.337 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:21.337 15:01:59 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:21.337 15:01:59 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:21.337 15:01:59 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:21.337 15:01:59 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:21.337 15:01:59 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:21.337 15:01:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.337 15:01:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.337 ************************************ 00:06:21.337 START TEST default_locks 00:06:21.337 ************************************ 00:06:21.337 15:01:59 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:21.337 15:01:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=64203 00:06:21.337 15:01:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:21.337 15:01:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 64203 00:06:21.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.337 15:01:59 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 64203 ']' 00:06:21.337 15:01:59 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.337 15:01:59 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.337 15:01:59 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
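
The START/END banners and the real/user/sys lines above come from the run_test wrapper in autotest_common.sh. A rough sketch of its visible behaviour follows (@1099 argument check, banner, timed command, closing banner); the xtrace_disable bookkeeping at @1105/@1124 and the exact banner width are omitted, so treat this as an approximation rather than the wrapper itself.

    run_test() {
        if [ $# -le 1 ]; then
            echo "usage: run_test <name> <command> [args...]" >&2
            return 1
        fi
        local test_name=$1
        shift

        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                    # produces the real/user/sys lines seen in the log
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }
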
00:06:21.337 15:01:59 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.337 15:01:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.597 [2024-07-15 15:01:59.531633] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:06:21.597 [2024-07-15 15:01:59.531794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64203 ] 00:06:21.597 [2024-07-15 15:01:59.682312] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.857 [2024-07-15 15:01:59.935458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.796 15:02:00 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.796 15:02:00 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:22.796 15:02:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 64203 00:06:22.796 15:02:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 64203 00:06:22.796 15:02:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:23.363 15:02:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 64203 00:06:23.363 15:02:01 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 64203 ']' 00:06:23.363 15:02:01 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 64203 00:06:23.363 15:02:01 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:23.363 15:02:01 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:23.363 15:02:01 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64203 00:06:23.363 killing process with pid 64203 00:06:23.363 15:02:01 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:23.363 15:02:01 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:23.364 15:02:01 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64203' 00:06:23.364 15:02:01 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 64203 00:06:23.364 15:02:01 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 64203 00:06:25.903 15:02:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 64203 00:06:25.903 15:02:03 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:25.903 15:02:03 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 64203 00:06:25.903 15:02:03 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:25.903 15:02:03 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:25.903 15:02:03 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:25.903 15:02:03 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:25.903 15:02:03 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 64203 00:06:25.903 15:02:03 event.cpu_locks.default_locks -- 
common/autotest_common.sh@829 -- # '[' -z 64203 ']' 00:06:25.903 15:02:03 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.903 15:02:03 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:25.903 15:02:03 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.903 15:02:03 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:25.903 ERROR: process (pid: 64203) is no longer running 00:06:25.903 15:02:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.903 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (64203) - No such process 00:06:25.903 15:02:03 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:25.903 15:02:03 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:25.903 15:02:03 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:25.903 15:02:03 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:25.903 15:02:03 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:25.903 15:02:03 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:25.903 15:02:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:25.903 15:02:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:25.903 15:02:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:25.903 15:02:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:25.903 00:06:25.903 real 0m4.408s 00:06:25.903 user 0m4.311s 00:06:25.903 sys 0m0.654s 00:06:25.903 15:02:03 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.903 15:02:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.903 ************************************ 00:06:25.903 END TEST default_locks 00:06:25.903 ************************************ 00:06:25.903 15:02:03 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:25.903 15:02:03 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:25.903 15:02:03 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:25.903 15:02:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.903 15:02:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.903 ************************************ 00:06:25.903 START TEST default_locks_via_rpc 00:06:25.903 ************************************ 00:06:25.903 15:02:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:25.903 15:02:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=64278 00:06:25.903 15:02:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:25.903 15:02:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 64278 00:06:25.903 15:02:03 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@829 -- # '[' -z 64278 ']' 00:06:25.903 15:02:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.903 15:02:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:25.903 15:02:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.903 15:02:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:25.903 15:02:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.903 [2024-07-15 15:02:03.993523] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:06:25.903 [2024-07-15 15:02:03.993743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64278 ] 00:06:26.162 [2024-07-15 15:02:04.156268] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.420 [2024-07-15 15:02:04.490433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.793 15:02:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.793 15:02:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:27.793 15:02:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:27.793 15:02:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.793 15:02:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.793 15:02:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.793 15:02:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:27.793 15:02:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:27.793 15:02:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:27.793 15:02:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:27.793 15:02:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:27.793 15:02:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.793 15:02:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.793 15:02:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.793 15:02:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 64278 00:06:27.793 15:02:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 64278 00:06:27.793 15:02:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:28.052 15:02:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 64278 00:06:28.052 15:02:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 64278 ']' 
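
The default_locks_via_rpc test above toggles the core locks over RPC (framework_disable_cpumask_locks / framework_enable_cpumask_locks) and checks the result with two small helpers traced at cpu_locks.sh@22 and @26-@27. A sketch of both follows; the way no_locks gathers leftover lock files is not visible in the trace (no files were found), so the /var/tmp/spdk_cpu_lock* pattern is an assumption.

    locks_exist() {
        # cpu_locks.sh@22: does the process with this pid hold an spdk_cpu_lock file?
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    no_locks() {
        # cpu_locks.sh@26-@27: succeed only when no core-lock files are held
        local lock_files=()
        # assumption: the real helper enumerates the lock files, e.g. /var/tmp/spdk_cpu_lock*
        while IFS= read -r f; do lock_files+=("$f"); done < <(ls /var/tmp/spdk_cpu_lock* 2> /dev/null)
        (( ${#lock_files[@]} != 0 )) && return 1
        return 0
    }
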
00:06:28.052 15:02:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 64278 00:06:28.052 15:02:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:28.052 15:02:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:28.052 15:02:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64278 00:06:28.311 killing process with pid 64278 00:06:28.311 15:02:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:28.311 15:02:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:28.311 15:02:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64278' 00:06:28.311 15:02:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 64278 00:06:28.311 15:02:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 64278 00:06:31.618 00:06:31.618 real 0m5.552s 00:06:31.618 user 0m5.317s 00:06:31.618 sys 0m0.818s 00:06:31.618 15:02:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.618 15:02:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.618 ************************************ 00:06:31.618 END TEST default_locks_via_rpc 00:06:31.618 ************************************ 00:06:31.618 15:02:09 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:31.618 15:02:09 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:31.618 15:02:09 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:31.618 15:02:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.618 15:02:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.618 ************************************ 00:06:31.618 START TEST non_locking_app_on_locked_coremask 00:06:31.618 ************************************ 00:06:31.618 15:02:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:31.618 15:02:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=64373 00:06:31.618 15:02:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:31.618 15:02:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 64373 /var/tmp/spdk.sock 00:06:31.618 15:02:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64373 ']' 00:06:31.618 15:02:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.618 15:02:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:31.618 15:02:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
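
Every sub-test ends by tearing the target down with killprocess, traced in full just above (@948-@972). The sketch below follows those traced commands; the sudo branch is not exercised in this log, so its body is only described in a comment, and the final wait handling may differ in the real helper.

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                  # @948: guard against an empty pid

        if kill -0 "$pid"; then                    # @952: only act if the process is still alive
            local process_name=
            if [ "$(uname)" = Linux ]; then        # @953
                process_name=$(ps --no-headers -o comm= "$pid")   # "reactor_0" for an SPDK target (@954)
            fi
            # @958 compares $process_name against "sudo"; when the target was started
            # through sudo the real helper signals the child instead (branch not taken here)
            echo "killing process with pid $pid"   # @966
            kill "$pid"                            # @967
            wait "$pid" || true                    # @972: the killed target exits non-zero by design
        fi
    }
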
00:06:31.618 15:02:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:31.618 15:02:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.618 [2024-07-15 15:02:09.627086] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:06:31.618 [2024-07-15 15:02:09.627228] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64373 ] 00:06:31.877 [2024-07-15 15:02:09.802326] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.134 [2024-07-15 15:02:10.128377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:33.513 15:02:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.513 15:02:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:33.513 15:02:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=64390 00:06:33.513 15:02:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 64390 /var/tmp/spdk2.sock 00:06:33.513 15:02:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64390 ']' 00:06:33.513 15:02:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.513 15:02:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.513 15:02:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.513 15:02:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:33.513 15:02:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.513 15:02:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.513 [2024-07-15 15:02:11.439193] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:06:33.513 [2024-07-15 15:02:11.439741] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64390 ] 00:06:33.513 [2024-07-15 15:02:11.602775] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:33.513 [2024-07-15 15:02:11.602854] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.450 [2024-07-15 15:02:12.217361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.983 15:02:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.983 15:02:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:36.983 15:02:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 64373 00:06:36.983 15:02:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64373 00:06:36.983 15:02:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:36.983 15:02:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 64373 00:06:36.983 15:02:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64373 ']' 00:06:36.983 15:02:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 64373 00:06:36.983 15:02:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:36.983 15:02:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:36.983 15:02:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64373 00:06:36.983 killing process with pid 64373 00:06:36.983 15:02:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:36.983 15:02:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:36.983 15:02:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64373' 00:06:36.983 15:02:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 64373 00:06:36.983 15:02:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 64373 00:06:43.556 15:02:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 64390 00:06:43.556 15:02:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64390 ']' 00:06:43.556 15:02:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 64390 00:06:43.556 15:02:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:43.556 15:02:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:43.556 15:02:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64390 00:06:43.556 killing process with pid 64390 00:06:43.556 15:02:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:43.556 15:02:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:43.557 15:02:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64390' 00:06:43.557 15:02:20 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 64390 00:06:43.557 15:02:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 64390 00:06:45.463 00:06:45.463 real 0m13.573s 00:06:45.463 user 0m13.556s 00:06:45.463 sys 0m1.571s 00:06:45.463 15:02:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.463 15:02:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.463 ************************************ 00:06:45.463 END TEST non_locking_app_on_locked_coremask 00:06:45.463 ************************************ 00:06:45.463 15:02:23 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:45.463 15:02:23 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:45.463 15:02:23 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:45.463 15:02:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.463 15:02:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.463 ************************************ 00:06:45.463 START TEST locking_app_on_unlocked_coremask 00:06:45.463 ************************************ 00:06:45.463 15:02:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:45.463 15:02:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=64557 00:06:45.463 15:02:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:45.463 15:02:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 64557 /var/tmp/spdk.sock 00:06:45.463 15:02:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64557 ']' 00:06:45.463 15:02:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.463 15:02:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.463 15:02:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.463 15:02:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.463 15:02:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.463 [2024-07-15 15:02:23.242987] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:06:45.463 [2024-07-15 15:02:23.243224] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64557 ] 00:06:45.463 [2024-07-15 15:02:23.406925] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
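locking_app_on_unlocked_coremask inverts the previous case: here the first target (pid 64557) is the one started with --disable-cpumask-locks, so the ordinary second target started next (pid 64584) is the process expected to own the core 0 lock. Condensed, with a placeholder for the second pid:

    spdk_tgt -m 0x1 --disable-cpumask-locks &          # unlocked first instance
    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &           # locking second instance
    lslocks -p <second_pid> | grep -q spdk_cpu_lock    # now the lock must belong to the second process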
00:06:45.463 [2024-07-15 15:02:23.407078] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.721 [2024-07-15 15:02:23.680010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.659 15:02:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.659 15:02:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:46.659 15:02:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=64584 00:06:46.659 15:02:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 64584 /var/tmp/spdk2.sock 00:06:46.659 15:02:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:46.659 15:02:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64584 ']' 00:06:46.659 15:02:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.659 15:02:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:46.659 15:02:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:46.659 15:02:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:46.659 15:02:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.918 [2024-07-15 15:02:24.795660] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
00:06:46.918 [2024-07-15 15:02:24.795903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64584 ] 00:06:46.918 [2024-07-15 15:02:24.961931] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.486 [2024-07-15 15:02:25.498084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.431 15:02:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.431 15:02:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:49.431 15:02:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 64584 00:06:49.431 15:02:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64584 00:06:49.431 15:02:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:49.997 15:02:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 64557 00:06:49.997 15:02:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64557 ']' 00:06:49.997 15:02:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 64557 00:06:49.997 15:02:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:49.997 15:02:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:49.997 15:02:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64557 00:06:49.997 15:02:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:49.997 15:02:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:49.997 killing process with pid 64557 00:06:49.997 15:02:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64557' 00:06:49.997 15:02:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 64557 00:06:49.997 15:02:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 64557 00:06:56.578 15:02:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 64584 00:06:56.578 15:02:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64584 ']' 00:06:56.578 15:02:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 64584 00:06:56.578 15:02:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:56.578 15:02:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:56.578 15:02:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64584 00:06:56.578 killing process with pid 64584 00:06:56.578 15:02:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:56.578 15:02:33 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:56.578 15:02:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64584' 00:06:56.578 15:02:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 64584 00:06:56.578 15:02:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 64584 00:06:59.156 ************************************ 00:06:59.156 END TEST locking_app_on_unlocked_coremask 00:06:59.156 ************************************ 00:06:59.156 00:06:59.156 real 0m13.483s 00:06:59.156 user 0m13.815s 00:06:59.156 sys 0m1.257s 00:06:59.156 15:02:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.156 15:02:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.156 15:02:36 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:59.156 15:02:36 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:59.156 15:02:36 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:59.156 15:02:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.156 15:02:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:59.156 ************************************ 00:06:59.156 START TEST locking_app_on_locked_coremask 00:06:59.156 ************************************ 00:06:59.156 15:02:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:59.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.156 15:02:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=64743 00:06:59.156 15:02:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:59.156 15:02:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 64743 /var/tmp/spdk.sock 00:06:59.156 15:02:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64743 ']' 00:06:59.156 15:02:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.156 15:02:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:59.156 15:02:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.156 15:02:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:59.156 15:02:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.156 [2024-07-15 15:02:36.797585] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
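locking_app_on_locked_coremask, which starts here, is the negative case: both targets keep cpumask locking enabled and both ask for core 0, so the second launch is expected to die before it ever listens on /var/tmp/spdk2.sock. Condensed to a foreground run (the real test backgrounds the process and wraps the wait in the NOT helper), the expectation is:

    spdk_tgt -m 0x1 &                                  # claims the core 0 lock
    if spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then    # second locking instance on the same core
        echo "unexpected success" >&2; exit 1
    fi
    # expected failure, as logged further down:
    # "Cannot create lock on core 0, probably process 64743 has claimed it."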
00:06:59.156 [2024-07-15 15:02:36.797894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64743 ] 00:06:59.156 [2024-07-15 15:02:36.972104] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.156 [2024-07-15 15:02:37.220336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.090 15:02:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.090 15:02:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:00.090 15:02:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:00.090 15:02:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=64765 00:07:00.090 15:02:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 64765 /var/tmp/spdk2.sock 00:07:00.090 15:02:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:00.090 15:02:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 64765 /var/tmp/spdk2.sock 00:07:00.090 15:02:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:00.090 15:02:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:00.090 15:02:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:00.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:00.090 15:02:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:00.090 15:02:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 64765 /var/tmp/spdk2.sock 00:07:00.090 15:02:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64765 ']' 00:07:00.090 15:02:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:00.090 15:02:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:00.090 15:02:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:00.090 15:02:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:00.090 15:02:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.348 [2024-07-15 15:02:38.313347] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
00:07:00.348 [2024-07-15 15:02:38.313522] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64765 ] 00:07:00.606 [2024-07-15 15:02:38.483538] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 64743 has claimed it. 00:07:00.606 [2024-07-15 15:02:38.483626] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:00.865 ERROR: process (pid: 64765) is no longer running 00:07:00.865 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (64765) - No such process 00:07:00.865 15:02:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.865 15:02:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:00.865 15:02:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:00.865 15:02:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:00.865 15:02:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:00.865 15:02:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:00.865 15:02:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 64743 00:07:00.865 15:02:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64743 00:07:00.865 15:02:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:01.431 15:02:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 64743 00:07:01.431 15:02:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64743 ']' 00:07:01.431 15:02:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 64743 00:07:01.431 15:02:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:01.431 15:02:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:01.431 15:02:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64743 00:07:01.431 killing process with pid 64743 00:07:01.431 15:02:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:01.431 15:02:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:01.431 15:02:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64743' 00:07:01.431 15:02:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 64743 00:07:01.431 15:02:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 64743 00:07:03.975 ************************************ 00:07:03.975 END TEST locking_app_on_locked_coremask 00:07:03.975 ************************************ 00:07:03.975 00:07:03.975 real 0m5.355s 00:07:03.975 user 0m5.554s 00:07:03.975 sys 0m0.823s 00:07:03.975 15:02:42 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.975 15:02:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:03.975 15:02:42 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:03.975 15:02:42 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:03.975 15:02:42 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:03.975 15:02:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.975 15:02:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.232 ************************************ 00:07:04.232 START TEST locking_overlapped_coremask 00:07:04.232 ************************************ 00:07:04.232 15:02:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:07:04.232 15:02:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=64840 00:07:04.232 15:02:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 64840 /var/tmp/spdk.sock 00:07:04.232 15:02:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 64840 ']' 00:07:04.232 15:02:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.232 15:02:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:04.232 15:02:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:04.233 15:02:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.233 15:02:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:04.233 15:02:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.233 [2024-07-15 15:02:42.211271] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
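locking_overlapped_coremask moves from a single shared core to partially overlapping masks: the first target above uses -m 0x7 and the second one started next uses -m 0x1c, so the two only collide on core 2 and that is where the lock claim must fail. The overlap is plain bitmask arithmetic:

    # -m 0x7  -> binary 00111 -> cores 0,1,2 -> lock files /var/tmp/spdk_cpu_lock_000..002
    # -m 0x1c -> binary 11100 -> cores 2,3,4
    printf '%x\n' $(( 0x7 & 0x1c ))    # prints 4, i.e. only bit 2 (core 2) is contested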
00:07:04.233 [2024-07-15 15:02:42.211562] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64840 ] 00:07:04.497 [2024-07-15 15:02:42.375437] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:04.756 [2024-07-15 15:02:42.696971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.756 [2024-07-15 15:02:42.697187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.756 [2024-07-15 15:02:42.697226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.136 15:02:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:06.136 15:02:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:06.136 15:02:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=64863 00:07:06.136 15:02:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 64863 /var/tmp/spdk2.sock 00:07:06.136 15:02:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:06.136 15:02:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 64863 /var/tmp/spdk2.sock 00:07:06.136 15:02:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:06.136 15:02:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:06.136 15:02:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.136 15:02:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:06.136 15:02:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.136 15:02:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 64863 /var/tmp/spdk2.sock 00:07:06.136 15:02:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 64863 ']' 00:07:06.136 15:02:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:06.136 15:02:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:06.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:06.136 15:02:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:06.136 15:02:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:06.136 15:02:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.136 [2024-07-15 15:02:44.074864] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
00:07:06.136 [2024-07-15 15:02:44.075075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64863 ] 00:07:06.395 [2024-07-15 15:02:44.248359] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64840 has claimed it. 00:07:06.395 [2024-07-15 15:02:44.248485] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:06.653 ERROR: process (pid: 64863) is no longer running 00:07:06.653 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (64863) - No such process 00:07:06.653 15:02:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:06.653 15:02:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:06.653 15:02:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:06.653 15:02:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:06.653 15:02:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:06.653 15:02:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:06.653 15:02:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:06.653 15:02:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:06.653 15:02:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:06.653 15:02:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:06.653 15:02:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 64840 00:07:06.653 15:02:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 64840 ']' 00:07:06.653 15:02:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 64840 00:07:06.653 15:02:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:07:06.653 15:02:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:06.653 15:02:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64840 00:07:06.653 15:02:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:06.653 15:02:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:06.653 15:02:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64840' 00:07:06.653 killing process with pid 64840 00:07:06.653 15:02:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 64840 00:07:06.653 15:02:44 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 64840 00:07:09.938 00:07:09.938 real 0m5.476s 00:07:09.938 user 0m14.085s 00:07:09.938 sys 0m0.857s 00:07:09.938 15:02:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.938 15:02:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.938 ************************************ 00:07:09.938 END TEST locking_overlapped_coremask 00:07:09.938 ************************************ 00:07:09.938 15:02:47 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:09.938 15:02:47 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:09.938 15:02:47 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:09.938 15:02:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.938 15:02:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:09.938 ************************************ 00:07:09.938 START TEST locking_overlapped_coremask_via_rpc 00:07:09.938 ************************************ 00:07:09.938 15:02:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:07:09.938 15:02:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=64933 00:07:09.938 15:02:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:09.938 15:02:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 64933 /var/tmp/spdk.sock 00:07:09.938 15:02:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64933 ']' 00:07:09.938 15:02:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.938 15:02:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.938 15:02:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.938 15:02:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.938 15:02:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.938 [2024-07-15 15:02:47.763243] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:07:09.938 [2024-07-15 15:02:47.763540] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64933 ] 00:07:09.938 [2024-07-15 15:02:47.935074] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
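The check_remaining_locks step that closed the previous test (and runs again at the end of this one) verifies that the lock files left under /var/tmp are exactly the three belonging to the 0x7 mask, with nothing missing and nothing stale. It is essentially:

    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${locks_expected[*]}" ]]    # exactly one lock file per core 0-2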
00:07:09.938 [2024-07-15 15:02:47.935248] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:10.197 [2024-07-15 15:02:48.189147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.197 [2024-07-15 15:02:48.189311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.197 [2024-07-15 15:02:48.189354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.133 15:02:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:11.133 15:02:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:11.133 15:02:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:11.133 15:02:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=64955 00:07:11.133 15:02:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 64955 /var/tmp/spdk2.sock 00:07:11.133 15:02:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64955 ']' 00:07:11.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:11.133 15:02:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:11.133 15:02:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:11.133 15:02:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:11.133 15:02:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:11.133 15:02:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.391 [2024-07-15 15:02:49.292316] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:07:11.391 [2024-07-15 15:02:49.292482] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64955 ] 00:07:11.392 [2024-07-15 15:02:49.467563] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:11.392 [2024-07-15 15:02:49.467660] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:12.328 [2024-07-15 15:02:50.113843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:12.328 [2024-07-15 15:02:50.113867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.328 [2024-07-15 15:02:50.113896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:14.927 15:02:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:14.927 15:02:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:14.927 15:02:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:14.927 15:02:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.927 15:02:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.927 15:02:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.927 15:02:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:14.927 15:02:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:14.927 15:02:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:14.927 15:02:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:14.927 15:02:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.927 15:02:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:14.927 15:02:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.927 15:02:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:14.927 15:02:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.927 15:02:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.927 [2024-07-15 15:02:52.561360] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64933 has claimed it. 
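Because both targets in this test start with --disable-cpumask-locks, the locks are claimed afterwards over JSON-RPC. The first target (pid 64933) takes its cores with framework_enable_cpumask_locks; issuing the same RPC to the second target on /var/tmp/spdk2.sock must fail, since core 2 is already locked, which is exactly the claim_cpu_cores error above. In terms of the autotest rpc_cmd wrapper used in these traces:

    rpc_cmd framework_enable_cpumask_locks            # first target claims cores 0-2 at runtime
    if rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks; then
        echo "lock claim on the overlapping target unexpectedly succeeded" >&2; exit 1
    fi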
00:07:14.927 request: 00:07:14.927 { 00:07:14.927 "method": "framework_enable_cpumask_locks", 00:07:14.927 "req_id": 1 00:07:14.927 } 00:07:14.927 Got JSON-RPC error response 00:07:14.927 response: 00:07:14.927 { 00:07:14.927 "code": -32603, 00:07:14.927 "message": "Failed to claim CPU core: 2" 00:07:14.927 } 00:07:14.927 15:02:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:14.927 15:02:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:14.927 15:02:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:14.927 15:02:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:14.927 15:02:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:14.927 15:02:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 64933 /var/tmp/spdk.sock 00:07:14.927 15:02:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64933 ']' 00:07:14.927 15:02:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.927 15:02:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:14.927 15:02:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.927 15:02:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:14.927 15:02:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.927 15:02:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:14.927 15:02:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:14.927 15:02:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 64955 /var/tmp/spdk2.sock 00:07:14.927 15:02:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64955 ']' 00:07:14.927 15:02:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:14.927 15:02:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:14.927 15:02:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:14.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
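waitforlisten, used throughout these tests, simply blocks until the freshly started target is accepting RPCs on its UNIX socket (max_retries=100 in the traces above). A generic stand-in, not the actual helper from autotest_common.sh, would look like:

    waitforlisten() {                                # rough sketch: poll until the RPC socket exists
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died while starting
            [[ -S "$sock" ]] && return 0             # socket is up, target is listening
            sleep 0.1
        done
        return 1
    }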
00:07:14.927 15:02:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:14.927 15:02:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.927 15:02:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:14.927 15:02:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:14.927 15:02:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:14.927 15:02:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:14.927 15:02:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:14.927 15:02:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:14.927 00:07:14.927 real 0m5.382s 00:07:14.927 user 0m1.466s 00:07:14.927 sys 0m0.233s 00:07:14.927 15:02:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.927 15:02:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.927 ************************************ 00:07:14.927 END TEST locking_overlapped_coremask_via_rpc 00:07:14.927 ************************************ 00:07:15.187 15:02:53 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:15.187 15:02:53 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:15.187 15:02:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 64933 ]] 00:07:15.187 15:02:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 64933 00:07:15.187 15:02:53 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64933 ']' 00:07:15.187 15:02:53 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64933 00:07:15.187 15:02:53 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:15.187 15:02:53 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:15.187 15:02:53 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64933 00:07:15.187 15:02:53 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:15.187 15:02:53 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:15.187 15:02:53 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64933' 00:07:15.187 killing process with pid 64933 00:07:15.187 15:02:53 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 64933 00:07:15.187 15:02:53 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 64933 00:07:18.478 15:02:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 64955 ]] 00:07:18.478 15:02:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 64955 00:07:18.478 15:02:55 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64955 ']' 00:07:18.478 15:02:55 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64955 00:07:18.478 15:02:55 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:18.478 15:02:55 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:18.478 15:02:55 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64955 00:07:18.478 15:02:55 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:18.478 killing process with pid 64955 00:07:18.478 15:02:55 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:18.478 15:02:55 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64955' 00:07:18.478 15:02:55 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 64955 00:07:18.478 15:02:55 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 64955 00:07:21.014 15:02:58 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:21.014 15:02:58 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:21.014 Process with pid 64933 is not found 00:07:21.014 15:02:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 64933 ]] 00:07:21.014 15:02:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 64933 00:07:21.014 15:02:58 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64933 ']' 00:07:21.014 15:02:58 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64933 00:07:21.014 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (64933) - No such process 00:07:21.014 15:02:58 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 64933 is not found' 00:07:21.014 15:02:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 64955 ]] 00:07:21.014 15:02:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 64955 00:07:21.014 15:02:58 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64955 ']' 00:07:21.014 15:02:58 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64955 00:07:21.014 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (64955) - No such process 00:07:21.014 Process with pid 64955 is not found 00:07:21.014 15:02:58 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 64955 is not found' 00:07:21.014 15:02:58 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:21.014 ************************************ 00:07:21.014 END TEST cpu_locks 00:07:21.014 00:07:21.014 real 0m59.420s 00:07:21.014 user 1m39.235s 00:07:21.014 sys 0m7.641s 00:07:21.014 15:02:58 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.014 15:02:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:21.014 ************************************ 00:07:21.014 15:02:58 event -- common/autotest_common.sh@1142 -- # return 0 00:07:21.014 ************************************ 00:07:21.014 END TEST event 00:07:21.014 ************************************ 00:07:21.014 00:07:21.014 real 1m32.775s 00:07:21.014 user 2m43.236s 00:07:21.014 sys 0m11.710s 00:07:21.014 15:02:58 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.014 15:02:58 event -- common/autotest_common.sh@10 -- # set +x 00:07:21.014 15:02:58 -- common/autotest_common.sh@1142 -- # return 0 00:07:21.014 15:02:58 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:21.014 15:02:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:21.014 15:02:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.014 15:02:58 -- common/autotest_common.sh@10 -- # set +x 00:07:21.014 ************************************ 00:07:21.014 START TEST thread 
00:07:21.014 ************************************ 00:07:21.014 15:02:58 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:21.014 * Looking for test storage... 00:07:21.014 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:21.014 15:02:58 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:21.014 15:02:58 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:21.014 15:02:58 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.014 15:02:58 thread -- common/autotest_common.sh@10 -- # set +x 00:07:21.014 ************************************ 00:07:21.014 START TEST thread_poller_perf 00:07:21.014 ************************************ 00:07:21.014 15:02:58 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:21.014 [2024-07-15 15:02:58.995277] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:07:21.014 [2024-07-15 15:02:58.995402] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65162 ] 00:07:21.273 [2024-07-15 15:02:59.147484] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.532 [2024-07-15 15:02:59.394127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.532 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:22.907 ====================================== 00:07:22.907 busy:2302729314 (cyc) 00:07:22.907 total_run_count: 351000 00:07:22.907 tsc_hz: 2290000000 (cyc) 00:07:22.907 ====================================== 00:07:22.907 poller_cost: 6560 (cyc), 2864 (nsec) 00:07:22.907 00:07:22.907 real 0m1.950s 00:07:22.907 user 0m1.728s 00:07:22.907 sys 0m0.113s 00:07:22.907 15:03:00 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.907 15:03:00 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:22.907 ************************************ 00:07:22.907 END TEST thread_poller_perf 00:07:22.907 ************************************ 00:07:22.907 15:03:00 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:22.907 15:03:00 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:22.907 15:03:00 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:22.907 15:03:00 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.907 15:03:00 thread -- common/autotest_common.sh@10 -- # set +x 00:07:22.907 ************************************ 00:07:22.907 START TEST thread_poller_perf 00:07:22.907 ************************************ 00:07:22.907 15:03:00 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:22.907 [2024-07-15 15:03:01.002264] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
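The poller_perf figures above reduce to simple arithmetic: poller_cost is the busy TSC cycle count divided by the number of poller executions, and the nanosecond figure follows from tsc_hz. For the first run (-b 1000 pollers, -l 1 microsecond period, -t 1 second):

    echo $(( 2302729314 / 351000 ))                # ~6560 cycles per poller execution
    awk 'BEGIN { print 6560 / 2290000000 * 1e9 }'  # ~2864 ns at the 2.29 GHz TSC

The second run that starts here repeats the measurement with a 0 microsecond period, i.e. the pollers are registered as active pollers that run on every reactor iteration rather than on a timer.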
00:07:22.907 [2024-07-15 15:03:01.002466] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65199 ] 00:07:23.165 [2024-07-15 15:03:01.164628] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.423 [2024-07-15 15:03:01.440041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.423 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:24.802 ====================================== 00:07:24.802 busy:2294663592 (cyc) 00:07:24.802 total_run_count: 4116000 00:07:24.802 tsc_hz: 2290000000 (cyc) 00:07:24.802 ====================================== 00:07:24.802 poller_cost: 557 (cyc), 243 (nsec) 00:07:25.060 00:07:25.061 real 0m1.977s 00:07:25.061 user 0m1.745s 00:07:25.061 sys 0m0.120s 00:07:25.061 15:03:02 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.061 15:03:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:25.061 ************************************ 00:07:25.061 END TEST thread_poller_perf 00:07:25.061 ************************************ 00:07:25.061 15:03:02 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:25.061 15:03:02 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:25.061 00:07:25.061 real 0m4.157s 00:07:25.061 user 0m3.565s 00:07:25.061 sys 0m0.375s 00:07:25.061 15:03:02 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.061 15:03:02 thread -- common/autotest_common.sh@10 -- # set +x 00:07:25.061 ************************************ 00:07:25.061 END TEST thread 00:07:25.061 ************************************ 00:07:25.061 15:03:03 -- common/autotest_common.sh@1142 -- # return 0 00:07:25.061 15:03:03 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:25.061 15:03:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:25.061 15:03:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.061 15:03:03 -- common/autotest_common.sh@10 -- # set +x 00:07:25.061 ************************************ 00:07:25.061 START TEST accel 00:07:25.061 ************************************ 00:07:25.061 15:03:03 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:25.061 * Looking for test storage... 
00:07:25.061 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:25.061 15:03:03 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:25.061 15:03:03 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:25.061 15:03:03 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:25.061 15:03:03 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=65284 00:07:25.061 15:03:03 accel -- accel/accel.sh@63 -- # waitforlisten 65284 00:07:25.061 15:03:03 accel -- common/autotest_common.sh@829 -- # '[' -z 65284 ']' 00:07:25.061 15:03:03 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:25.061 15:03:03 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:25.061 15:03:03 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.061 15:03:03 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:25.061 15:03:03 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.061 15:03:03 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.318 15:03:03 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.318 15:03:03 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:25.318 15:03:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.318 15:03:03 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.318 15:03:03 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.318 15:03:03 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.318 15:03:03 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:25.318 15:03:03 accel -- accel/accel.sh@41 -- # jq -r . 00:07:25.318 [2024-07-15 15:03:03.270782] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:07:25.318 [2024-07-15 15:03:03.270922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65284 ] 00:07:25.576 [2024-07-15 15:03:03.436558] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.576 [2024-07-15 15:03:03.683649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.948 15:03:04 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:26.948 15:03:04 accel -- common/autotest_common.sh@862 -- # return 0 00:07:26.948 15:03:04 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:26.948 15:03:04 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:26.948 15:03:04 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:26.948 15:03:04 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:26.948 15:03:04 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:26.948 15:03:04 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:26.948 15:03:04 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:26.948 15:03:04 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.948 15:03:04 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.948 15:03:04 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.948 15:03:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.948 15:03:04 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.948 15:03:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.948 15:03:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.948 15:03:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.948 15:03:04 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.948 15:03:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.948 15:03:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.948 15:03:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.948 15:03:04 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.948 15:03:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.948 15:03:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.948 15:03:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.948 15:03:04 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.948 15:03:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.948 15:03:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.948 15:03:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.948 15:03:04 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.948 15:03:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.948 15:03:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.948 15:03:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.948 15:03:04 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.948 15:03:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.948 15:03:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.948 15:03:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.948 15:03:04 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.948 15:03:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.948 15:03:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.948 15:03:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.948 15:03:04 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.948 15:03:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.948 15:03:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.948 15:03:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.948 15:03:04 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.948 15:03:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.948 15:03:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.948 15:03:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.948 15:03:04 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.948 15:03:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.948 15:03:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.948 15:03:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.948 15:03:04 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.948 15:03:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.948 
15:03:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.948 15:03:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.948 15:03:04 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.948 15:03:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.948 15:03:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.948 15:03:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.948 15:03:04 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.948 15:03:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.948 15:03:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.948 15:03:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.948 15:03:04 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.948 15:03:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.948 15:03:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.948 15:03:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.948 15:03:04 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.948 15:03:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.948 15:03:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.948 15:03:04 accel -- accel/accel.sh@75 -- # killprocess 65284 00:07:26.948 15:03:04 accel -- common/autotest_common.sh@948 -- # '[' -z 65284 ']' 00:07:26.948 15:03:04 accel -- common/autotest_common.sh@952 -- # kill -0 65284 00:07:26.948 15:03:04 accel -- common/autotest_common.sh@953 -- # uname 00:07:26.948 15:03:04 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:26.949 15:03:04 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65284 00:07:26.949 killing process with pid 65284 00:07:26.949 15:03:04 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:26.949 15:03:04 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:26.949 15:03:04 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65284' 00:07:26.949 15:03:04 accel -- common/autotest_common.sh@967 -- # kill 65284 00:07:26.949 15:03:04 accel -- common/autotest_common.sh@972 -- # wait 65284 00:07:30.232 15:03:07 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:30.232 15:03:07 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:30.232 15:03:07 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:30.232 15:03:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.232 15:03:07 accel -- common/autotest_common.sh@10 -- # set +x 00:07:30.232 15:03:07 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:30.232 15:03:07 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:30.232 15:03:07 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:30.232 15:03:07 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:30.232 15:03:07 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:30.232 15:03:07 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.232 15:03:07 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.232 15:03:07 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:30.232 15:03:07 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:30.232 15:03:07 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
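For reference, the get_expected_opcs trace above (accel.sh@70-73) is just the target's opcode-to-module assignment being read back over RPC and parsed into a bash associative array. A minimal sketch of that loop, assuming rpc.py is on PATH and the target listens on the default /var/tmp/spdk.sock, looks like this:

    declare -A expected_opcs
    # jq flattens the accel_get_opc_assignments JSON into "opcode=module" lines
    exp_opcs=($(rpc.py accel_get_opc_assignments \
                | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'))
    for opc_opt in "${exp_opcs[@]}"; do
        IFS== read -r opc module <<< "$opc_opt"   # split "crc32c=software" on '='
        expected_opcs["$opc"]=$module             # record which module should serve each opcode
    done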
00:07:30.232 15:03:07 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.232 15:03:07 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:30.232 15:03:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:30.232 15:03:07 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:30.232 15:03:07 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:30.232 15:03:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.232 15:03:07 accel -- common/autotest_common.sh@10 -- # set +x 00:07:30.232 ************************************ 00:07:30.232 START TEST accel_missing_filename 00:07:30.232 ************************************ 00:07:30.232 15:03:07 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:30.232 15:03:07 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:30.232 15:03:07 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:30.232 15:03:07 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:30.232 15:03:07 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:30.232 15:03:07 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:30.232 15:03:07 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:30.232 15:03:07 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:30.232 15:03:07 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:30.232 15:03:07 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:30.232 15:03:07 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:30.232 15:03:07 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:30.232 15:03:07 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.232 15:03:07 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.232 15:03:07 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:30.232 15:03:07 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:30.233 15:03:07 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:30.233 [2024-07-15 15:03:07.854986] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:07:30.233 [2024-07-15 15:03:07.855235] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65372 ] 00:07:30.233 [2024-07-15 15:03:08.022744] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.233 [2024-07-15 15:03:08.289461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.492 [2024-07-15 15:03:08.568206] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:31.431 [2024-07-15 15:03:09.231335] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:31.690 A filename is required. 
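The "A filename is required." error above is the point of this test: the compress workload has no input file because -l was omitted, so the harness only passes when accel_perf exits non-zero. A rough stand-alone equivalent of that check, using the binary path from the trace (the real NOT() wrapper in common/autotest_common.sh does more exit-status bookkeeping than this), would be:

    perf=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
    # test passes only when the command fails, mirroring NOT accel_perf -t 1 -w compress
    if ! "$perf" -t 1 -w compress; then
        echo "accel_missing_filename: OK - compress without -l failed as expected"
    else
        echo "accel_missing_filename: FAIL - command unexpectedly succeeded" >&2
        exit 1
    fi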
00:07:31.690 15:03:09 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:31.690 ************************************ 00:07:31.691 END TEST accel_missing_filename 00:07:31.691 ************************************ 00:07:31.691 15:03:09 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:31.691 15:03:09 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:31.691 15:03:09 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:31.691 15:03:09 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:31.691 15:03:09 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:31.691 00:07:31.691 real 0m1.953s 00:07:31.691 user 0m1.707s 00:07:31.691 sys 0m0.177s 00:07:31.691 15:03:09 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.691 15:03:09 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:31.691 15:03:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:31.691 15:03:09 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:31.691 15:03:09 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:31.691 15:03:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.691 15:03:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:31.691 ************************************ 00:07:31.691 START TEST accel_compress_verify 00:07:31.691 ************************************ 00:07:31.691 15:03:09 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:31.691 15:03:09 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:31.691 15:03:09 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:31.691 15:03:09 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:31.950 15:03:09 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:31.950 15:03:09 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:31.950 15:03:09 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:31.950 15:03:09 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:31.950 15:03:09 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:31.950 15:03:09 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:31.950 15:03:09 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:31.950 15:03:09 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:31.950 15:03:09 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.950 15:03:09 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.950 15:03:09 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:31.950 15:03:09 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:07:31.950 15:03:09 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:31.950 [2024-07-15 15:03:09.845569] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:07:31.950 [2024-07-15 15:03:09.845828] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65414 ] 00:07:31.950 [2024-07-15 15:03:10.020558] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.209 [2024-07-15 15:03:10.289162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.468 [2024-07-15 15:03:10.567429] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:33.404 [2024-07-15 15:03:11.226284] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:33.664 00:07:33.664 Compression does not support the verify option, aborting. 00:07:33.664 15:03:11 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:33.664 15:03:11 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:33.664 15:03:11 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:33.664 15:03:11 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:33.664 15:03:11 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:33.664 15:03:11 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:33.664 00:07:33.664 real 0m1.917s 00:07:33.664 user 0m1.690s 00:07:33.664 sys 0m0.165s 00:07:33.664 ************************************ 00:07:33.664 END TEST accel_compress_verify 00:07:33.664 ************************************ 00:07:33.664 15:03:11 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.664 15:03:11 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:33.664 15:03:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:33.664 15:03:11 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:33.664 15:03:11 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:33.664 15:03:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.664 15:03:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:33.923 ************************************ 00:07:33.923 START TEST accel_wrong_workload 00:07:33.923 ************************************ 00:07:33.923 15:03:11 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:33.923 15:03:11 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:33.923 15:03:11 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:33.923 15:03:11 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:33.923 15:03:11 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:33.923 15:03:11 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:33.923 15:03:11 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:33.923 15:03:11 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:07:33.923 15:03:11 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:33.923 15:03:11 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:33.923 15:03:11 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.923 15:03:11 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.923 15:03:11 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.923 15:03:11 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.923 15:03:11 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:33.923 15:03:11 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:33.923 15:03:11 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:33.923 Unsupported workload type: foobar 00:07:33.923 [2024-07-15 15:03:11.835575] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:33.923 accel_perf options: 00:07:33.923 [-h help message] 00:07:33.923 [-q queue depth per core] 00:07:33.923 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:33.923 [-T number of threads per core 00:07:33.923 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:33.923 [-t time in seconds] 00:07:33.924 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:33.924 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:33.924 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:33.924 [-l for compress/decompress workloads, name of uncompressed input file 00:07:33.924 [-S for crc32c workload, use this seed value (default 0) 00:07:33.924 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:33.924 [-f for fill workload, use this BYTE value (default 255) 00:07:33.924 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:33.924 [-y verify result if this switch is on] 00:07:33.924 [-a tasks to allocate per core (default: same value as -q)] 00:07:33.924 Can be used to spread operations across a wider range of memory. 
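For comparison with the option listing above, a well-formed invocation looks like the crc32c run exercised later in this log; the workload, seed and verify flags map onto the -w, -S and -y options just printed:

    # 1-second software crc32c pass, seed 32, verify results (-y),
    # i.e. the same arguments run_test accel_crc32c uses further down
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y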
00:07:33.924 15:03:11 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:33.924 15:03:11 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:33.924 15:03:11 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:33.924 15:03:11 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:33.924 00:07:33.924 real 0m0.094s 00:07:33.924 user 0m0.084s 00:07:33.924 sys 0m0.052s 00:07:33.924 15:03:11 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.924 15:03:11 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:33.924 ************************************ 00:07:33.924 END TEST accel_wrong_workload 00:07:33.924 ************************************ 00:07:33.924 15:03:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:33.924 15:03:11 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:33.924 15:03:11 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:33.924 15:03:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.924 15:03:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:33.924 ************************************ 00:07:33.924 START TEST accel_negative_buffers 00:07:33.924 ************************************ 00:07:33.924 15:03:11 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:33.924 15:03:11 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:33.924 15:03:11 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:33.924 15:03:11 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:33.924 15:03:11 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:33.924 15:03:11 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:33.924 15:03:11 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:33.924 15:03:11 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:33.924 15:03:11 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:33.924 15:03:11 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:33.924 15:03:11 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.924 15:03:11 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.924 15:03:11 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.924 15:03:11 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.924 15:03:11 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:33.924 15:03:11 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:33.924 15:03:11 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:33.924 -x option must be non-negative. 
00:07:33.924 [2024-07-15 15:03:11.976721] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:33.924 accel_perf options: 00:07:33.924 [-h help message] 00:07:33.924 [-q queue depth per core] 00:07:33.924 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:33.924 [-T number of threads per core 00:07:33.924 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:33.924 [-t time in seconds] 00:07:33.924 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:33.924 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:33.924 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:33.924 [-l for compress/decompress workloads, name of uncompressed input file 00:07:33.924 [-S for crc32c workload, use this seed value (default 0) 00:07:33.924 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:33.924 [-f for fill workload, use this BYTE value (default 255) 00:07:33.924 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:33.924 [-y verify result if this switch is on] 00:07:33.924 [-a tasks to allocate per core (default: same value as -q)] 00:07:33.924 Can be used to spread operations across a wider range of memory. 00:07:33.924 15:03:12 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:33.924 15:03:12 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:33.924 ************************************ 00:07:33.924 END TEST accel_negative_buffers 00:07:33.924 ************************************ 00:07:33.924 15:03:12 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:33.924 15:03:12 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:33.924 00:07:33.924 real 0m0.086s 00:07:33.924 user 0m0.082s 00:07:33.924 sys 0m0.045s 00:07:33.924 15:03:12 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.924 15:03:12 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:34.183 15:03:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:34.183 15:03:12 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:34.183 15:03:12 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:34.183 15:03:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.183 15:03:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:34.183 ************************************ 00:07:34.184 START TEST accel_crc32c 00:07:34.184 ************************************ 00:07:34.184 15:03:12 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:34.184 15:03:12 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:34.184 15:03:12 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:34.184 15:03:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.184 15:03:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.184 15:03:12 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:34.184 15:03:12 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:07:34.184 15:03:12 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:34.184 15:03:12 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:34.184 15:03:12 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:34.184 15:03:12 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.184 15:03:12 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.184 15:03:12 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:34.184 15:03:12 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:34.184 15:03:12 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:34.184 [2024-07-15 15:03:12.125187] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:07:34.184 [2024-07-15 15:03:12.125308] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65492 ] 00:07:34.184 [2024-07-15 15:03:12.292317] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.443 [2024-07-15 15:03:12.553479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.012 15:03:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:36.933 15:03:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:36.933 15:03:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:36.933 15:03:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:36.933 15:03:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:07:36.933 15:03:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:36.933 15:03:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:36.933 15:03:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:36.933 15:03:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:36.933 15:03:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:36.933 15:03:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:36.933 15:03:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:36.933 15:03:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:36.933 15:03:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:36.933 15:03:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:36.933 15:03:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:36.933 15:03:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:36.933 15:03:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:36.933 15:03:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:36.933 15:03:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:36.933 15:03:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:36.933 15:03:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:36.933 15:03:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:36.933 15:03:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:36.933 15:03:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:36.933 15:03:14 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:36.933 15:03:14 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:36.933 ************************************ 00:07:36.933 END TEST accel_crc32c 00:07:36.933 ************************************ 00:07:36.933 15:03:14 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.933 00:07:36.933 real 0m2.933s 00:07:36.933 user 0m2.662s 00:07:36.933 sys 0m0.180s 00:07:36.933 15:03:14 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.933 15:03:14 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:37.192 15:03:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:37.192 15:03:15 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:37.192 15:03:15 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:37.192 15:03:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.192 15:03:15 accel -- common/autotest_common.sh@10 -- # set +x 00:07:37.192 ************************************ 00:07:37.192 START TEST accel_crc32c_C2 00:07:37.192 ************************************ 00:07:37.192 15:03:15 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:37.192 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:37.192 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:37.192 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:37.192 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:37.192 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:37.192 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:37.192 15:03:15 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:37.192 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:37.192 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:37.192 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.192 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.192 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:37.192 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:37.192 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:37.192 [2024-07-15 15:03:15.117227] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:07:37.192 [2024-07-15 15:03:15.117906] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65544 ] 00:07:37.192 [2024-07-15 15:03:15.294261] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.451 [2024-07-15 15:03:15.556744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:38.020 15:03:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.937 15:03:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:39.937 15:03:17 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.937 15:03:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.937 15:03:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.937 15:03:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:39.937 15:03:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.937 15:03:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.937 15:03:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.937 15:03:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:39.937 15:03:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.937 15:03:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.937 15:03:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.937 15:03:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:39.937 15:03:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.937 15:03:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.937 15:03:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.937 15:03:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:39.937 15:03:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.937 15:03:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.937 15:03:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.937 15:03:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:39.937 15:03:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.937 15:03:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.937 15:03:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.937 15:03:17 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:39.937 15:03:17 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:39.937 15:03:17 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:39.937 00:07:39.937 real 0m2.839s 00:07:39.937 user 0m2.574s 00:07:39.937 sys 0m0.175s 00:07:39.937 15:03:17 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.937 15:03:17 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:39.937 ************************************ 00:07:39.937 END TEST accel_crc32c_C2 00:07:39.937 ************************************ 00:07:39.937 15:03:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:39.937 15:03:17 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:39.937 15:03:17 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:39.937 15:03:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.937 15:03:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:39.937 ************************************ 00:07:39.937 START TEST accel_copy 00:07:39.937 ************************************ 00:07:39.937 15:03:17 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:07:39.937 15:03:17 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:39.937 15:03:17 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:39.937 15:03:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:39.937 15:03:17 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:39.937 15:03:17 accel.accel_copy 
-- accel/accel.sh@19 -- # read -r var val 00:07:39.937 15:03:17 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:39.937 15:03:17 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:39.937 15:03:17 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:39.937 15:03:17 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:39.937 15:03:17 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.937 15:03:17 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.937 15:03:17 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:39.937 15:03:17 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:39.937 15:03:17 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:39.937 [2024-07-15 15:03:18.018173] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:07:39.937 [2024-07-15 15:03:18.018392] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65591 ] 00:07:40.195 [2024-07-15 15:03:18.177674] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.452 [2024-07-15 15:03:18.433966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.710 15:03:18 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:40.710 15:03:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.710 15:03:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.710 15:03:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.710 15:03:18 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:40.710 15:03:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.711 15:03:18 
accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.711 15:03:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:43.242 15:03:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:43.242 15:03:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:43.242 15:03:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:43.242 15:03:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:43.242 15:03:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:43.242 15:03:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:43.242 15:03:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:43.242 15:03:20 accel.accel_copy -- accel/accel.sh@19 -- # read 
-r var val 00:07:43.242 15:03:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:43.242 15:03:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:43.242 15:03:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:43.242 15:03:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:43.242 15:03:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:43.242 15:03:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:43.242 15:03:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:43.242 15:03:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:43.242 15:03:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:43.242 15:03:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:43.242 15:03:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:43.242 15:03:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:43.242 15:03:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:43.242 15:03:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:43.242 15:03:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:43.242 15:03:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:43.242 15:03:20 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:43.242 15:03:20 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:43.242 15:03:20 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.242 00:07:43.242 real 0m2.833s 00:07:43.242 user 0m2.568s 00:07:43.242 sys 0m0.173s 00:07:43.242 15:03:20 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.242 ************************************ 00:07:43.242 END TEST accel_copy 00:07:43.242 ************************************ 00:07:43.242 15:03:20 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:43.242 15:03:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:43.242 15:03:20 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:43.242 15:03:20 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:43.242 15:03:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.242 15:03:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:43.242 ************************************ 00:07:43.242 START TEST accel_fill 00:07:43.242 ************************************ 00:07:43.242 15:03:20 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:43.242 15:03:20 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:43.242 15:03:20 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:43.242 15:03:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:43.242 15:03:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:43.242 15:03:20 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:43.242 15:03:20 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:43.242 15:03:20 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:43.242 15:03:20 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:43.242 15:03:20 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:43.242 15:03:20 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.242 15:03:20 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.242 15:03:20 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:43.242 15:03:20 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:43.242 15:03:20 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:43.242 [2024-07-15 15:03:20.919285] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:07:43.242 [2024-07-15 15:03:20.919520] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65637 ] 00:07:43.242 [2024-07-15 15:03:21.090539] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.242 [2024-07-15 15:03:21.350590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:43.809 15:03:21 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:43.809 15:03:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:45.709 15:03:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:45.709 15:03:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:45.709 15:03:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:45.709 15:03:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:45.709 15:03:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:45.709 15:03:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:45.709 15:03:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:45.709 15:03:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:45.709 15:03:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:45.709 15:03:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:45.709 15:03:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
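The val= records above show the fill case being configured before its 1-second run: opcode fill, fill value 0x80, 4096-byte buffers, the software module, queue depth 64, and verification enabled. A minimal way to reproduce that run by hand, assuming the same SPDK tree as this VM (the binary path is taken from the trace; the harness's "-c /dev/fd/62" accel JSON config is omitted so module defaults apply), is a sketch like:

# Sketch only -- re-running the logged fill case outside the accel.sh harness.
#   -t 1    run for 1 second          -w fill   workload type
#   -f 128  fill value (0x80)         -q 64     queue depth
#   -a 64 and -y are copied verbatim from the logged command line (-y enables verify)
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
    -t 1 -w fill -f 128 -q 64 -a 64 -y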
00:07:45.709 15:03:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:45.709 15:03:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:45.709 15:03:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:45.709 15:03:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:45.709 15:03:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:45.709 15:03:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:45.709 15:03:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:45.709 15:03:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:45.709 15:03:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:45.709 15:03:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:45.709 15:03:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:45.709 15:03:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:45.709 15:03:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:45.968 15:03:23 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:45.968 15:03:23 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:45.968 15:03:23 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:45.968 00:07:45.968 real 0m2.966s 00:07:45.968 user 0m0.019s 00:07:45.968 sys 0m0.002s 00:07:45.968 15:03:23 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.968 ************************************ 00:07:45.968 END TEST accel_fill 00:07:45.968 ************************************ 00:07:45.968 15:03:23 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:45.968 15:03:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:45.968 15:03:23 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:45.968 15:03:23 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:45.968 15:03:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.968 15:03:23 accel -- common/autotest_common.sh@10 -- # set +x 00:07:45.968 ************************************ 00:07:45.968 START TEST accel_copy_crc32c 00:07:45.968 ************************************ 00:07:45.968 15:03:23 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:45.968 15:03:23 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:45.968 15:03:23 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:45.968 15:03:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.968 15:03:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.968 15:03:23 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:45.968 15:03:23 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:45.968 15:03:23 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:45.968 15:03:23 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:45.968 15:03:23 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:45.968 15:03:23 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.968 15:03:23 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.968 15:03:23 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:45.968 15:03:23 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:07:45.968 15:03:23 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:45.968 [2024-07-15 15:03:23.930541] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:07:45.968 [2024-07-15 15:03:23.930682] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65689 ] 00:07:46.227 [2024-07-15 15:03:24.097048] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.485 [2024-07-15 15:03:24.371454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.744 15:03:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.309 15:03:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:49.309 15:03:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
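Every block of "IFS=:", "read -r var val" and "case \"$var\" in" records in these traces appears to be the harness reading accel_perf's colon-separated configuration summary and remembering which module and opcode were reported, which is what the trailing "[[ -n software ]]" / "[[ software == \s\o\f\t\w\a\r\e ]]" checks then test. A rough reconstruction of that loop, with variable and key names guessed from the trace rather than taken from the real accel.sh, would be:

# Rough reconstruction inferred from the xtrace records, not the actual accel.sh
# source; the key names matched below are guesses.
accel_module='' accel_opc=''
while IFS=: read -r var val; do
    case "$var" in
        *[Mm]odule*)   accel_module=$val ;;   # val keeps the text after the colon, e.g. " software"
        *[Ww]orkload*) accel_opc=$val ;;      # e.g. " copy_crc32c"
    esac
done < <(/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y)
[[ -n $accel_module && -n $accel_opc ]] || exit 1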
00:07:49.309 15:03:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.309 15:03:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.309 15:03:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:49.309 15:03:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.309 15:03:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.309 15:03:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.309 15:03:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:49.309 15:03:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.309 15:03:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.309 15:03:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.309 15:03:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:49.309 15:03:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.309 15:03:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.309 15:03:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.309 15:03:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:49.309 15:03:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.309 15:03:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.309 15:03:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.309 15:03:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:49.309 15:03:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.309 15:03:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.309 15:03:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.309 15:03:26 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:49.309 15:03:26 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:49.309 15:03:26 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:49.309 00:07:49.309 real 0m2.967s 00:07:49.309 user 0m2.704s 00:07:49.309 sys 0m0.173s 00:07:49.309 15:03:26 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.309 15:03:26 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:49.309 ************************************ 00:07:49.309 END TEST accel_copy_crc32c 00:07:49.309 ************************************ 00:07:49.309 15:03:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:49.309 15:03:26 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:49.309 15:03:26 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:49.309 15:03:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.309 15:03:26 accel -- common/autotest_common.sh@10 -- # set +x 00:07:49.309 ************************************ 00:07:49.309 START TEST accel_copy_crc32c_C2 00:07:49.309 ************************************ 00:07:49.309 15:03:26 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:49.309 15:03:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:49.309 15:03:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:49.309 15:03:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.309 15:03:26 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:49.309 15:03:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.309 15:03:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:49.309 15:03:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:49.309 15:03:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:49.310 15:03:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:49.310 15:03:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.310 15:03:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.310 15:03:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:49.310 15:03:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:49.310 15:03:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:49.310 [2024-07-15 15:03:26.952348] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:07:49.310 [2024-07-15 15:03:26.952586] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65741 ] 00:07:49.310 [2024-07-15 15:03:27.122978] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.310 [2024-07-15 15:03:27.392078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.877 15:03:27 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.877 15:03:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:51.775 15:03:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:51.775 15:03:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.775 15:03:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:51.775 15:03:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:51.775 15:03:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:51.775 15:03:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.775 15:03:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:51.775 15:03:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:51.775 15:03:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:51.775 15:03:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.775 15:03:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:51.775 15:03:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:51.775 15:03:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:51.775 15:03:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.775 15:03:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:51.775 15:03:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:51.775 15:03:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:51.775 15:03:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.775 15:03:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:51.775 15:03:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:51.775 15:03:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:51.775 15:03:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.775 15:03:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:51.775 15:03:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.033 15:03:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:52.033 15:03:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:52.033 15:03:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:52.033 00:07:52.033 real 0m3.003s 00:07:52.033 user 0m2.705s 00:07:52.033 sys 0m0.197s 00:07:52.033 ************************************ 00:07:52.033 END TEST accel_copy_crc32c_C2 
00:07:52.033 ************************************ 00:07:52.033 15:03:29 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.033 15:03:29 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:52.033 15:03:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:52.033 15:03:29 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:52.033 15:03:29 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:52.033 15:03:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.033 15:03:29 accel -- common/autotest_common.sh@10 -- # set +x 00:07:52.033 ************************************ 00:07:52.033 START TEST accel_dualcast 00:07:52.033 ************************************ 00:07:52.033 15:03:29 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:52.033 15:03:29 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:52.033 15:03:29 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:52.033 15:03:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:52.033 15:03:29 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:52.033 15:03:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:52.033 15:03:29 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:52.033 15:03:29 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:52.034 15:03:29 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:52.034 15:03:29 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:52.034 15:03:29 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.034 15:03:29 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.034 15:03:29 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:52.034 15:03:29 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:52.034 15:03:29 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:52.034 [2024-07-15 15:03:30.017162] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
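The copy_crc32c_C2 case that just finished is the same invocation as the earlier copy_crc32c one plus "-C 2", which shows up in its trace as an '8192 bytes' value alongside the '4096 bytes' one. Under the same path assumption as the sketches above, the standalone equivalent would be roughly:

# Sketch: copy_crc32c with the extra -C 2 argument, mirroring the run_test line above.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2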
00:07:52.034 [2024-07-15 15:03:30.017378] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65788 ] 00:07:52.291 [2024-07-15 15:03:30.186584] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.549 [2024-07-15 15:03:30.445060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:52.807 15:03:30 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:52.807 15:03:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:52.808 15:03:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:52.808 15:03:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:52.808 15:03:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:52.808 15:03:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:54.726 15:03:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:54.726 15:03:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:54.726 15:03:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:54.726 15:03:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:54.726 15:03:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:54.726 15:03:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:54.726 15:03:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:54.726 15:03:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:54.726 15:03:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:54.726 15:03:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:54.726 15:03:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:54.726 15:03:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:54.726 15:03:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:54.726 15:03:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:54.726 15:03:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:54.726 15:03:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:07:54.726 15:03:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:54.726 15:03:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:54.726 15:03:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:54.726 15:03:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:54.726 15:03:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:54.726 15:03:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:54.726 15:03:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:54.726 15:03:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:54.985 15:03:32 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:54.985 15:03:32 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:54.985 15:03:32 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:54.985 00:07:54.985 real 0m2.882s 00:07:54.985 user 0m2.607s 00:07:54.985 sys 0m0.184s 00:07:54.985 15:03:32 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.985 ************************************ 00:07:54.985 END TEST accel_dualcast 00:07:54.985 ************************************ 00:07:54.985 15:03:32 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:54.985 15:03:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:54.985 15:03:32 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:54.985 15:03:32 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:54.985 15:03:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.985 15:03:32 accel -- common/autotest_common.sh@10 -- # set +x 00:07:54.985 ************************************ 00:07:54.985 START TEST accel_compare 00:07:54.985 ************************************ 00:07:54.985 15:03:32 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:54.985 15:03:32 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:54.985 15:03:32 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:54.985 15:03:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:54.985 15:03:32 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:54.985 15:03:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:54.985 15:03:32 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:54.985 15:03:32 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:54.985 15:03:32 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:54.985 15:03:32 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:54.985 15:03:32 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:54.986 15:03:32 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:54.986 15:03:32 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:54.986 15:03:32 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:54.986 15:03:32 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:54.986 [2024-07-15 15:03:32.965572] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
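The dualcast result above and the compare run now starting use identical harness invocations except for the -w argument (accel_test -t 1 -w dualcast -y versus -t 1 -w compare -y). Re-running both back to back, with the same assumed binary path, is just:

# Sketch: the two neighbouring cases from this log, dualcast then compare, verify (-y) on.
for w in dualcast compare; do
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w "$w" -y || exit 1
done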
00:07:54.986 [2024-07-15 15:03:32.965802] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65840 ] 00:07:55.243 [2024-07-15 15:03:33.133257] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.501 [2024-07-15 15:03:33.383579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:55.759 15:03:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:57.660 15:03:35 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:57.660 15:03:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:57.660 15:03:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:57.660 15:03:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:57.660 15:03:35 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:57.660 15:03:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:57.660 15:03:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:57.660 15:03:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:57.661 15:03:35 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:57.661 15:03:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:57.661 15:03:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:57.661 15:03:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:57.661 15:03:35 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:57.661 15:03:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:57.661 15:03:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:57.661 15:03:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:57.661 15:03:35 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:07:57.661 15:03:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:57.661 15:03:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:57.661 15:03:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:57.661 15:03:35 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:57.661 15:03:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:57.661 15:03:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:57.661 15:03:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:57.661 15:03:35 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:57.661 15:03:35 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:57.661 15:03:35 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:57.661 00:07:57.661 real 0m2.844s 00:07:57.661 user 0m2.571s 00:07:57.661 sys 0m0.181s 00:07:57.661 15:03:35 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:57.661 15:03:35 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:57.661 ************************************ 00:07:57.661 END TEST accel_compare 00:07:57.661 ************************************ 00:07:57.919 15:03:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:57.919 15:03:35 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:57.919 15:03:35 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:57.919 15:03:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.919 15:03:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:57.919 ************************************ 00:07:57.919 START TEST accel_xor 00:07:57.919 ************************************ 00:07:57.919 15:03:35 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:57.919 15:03:35 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:57.919 15:03:35 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:57.919 15:03:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:57.919 15:03:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:57.919 15:03:35 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:57.919 15:03:35 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:57.919 15:03:35 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:57.919 15:03:35 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:57.919 15:03:35 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:57.919 15:03:35 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:57.919 15:03:35 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:57.919 15:03:35 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:57.919 15:03:35 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:57.919 15:03:35 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:57.919 [2024-07-15 15:03:35.869232] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
00:07:57.919 [2024-07-15 15:03:35.869358] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65892 ] 00:07:58.178 [2024-07-15 15:03:36.035894] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.436 [2024-07-15 15:03:36.295062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.695 15:03:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.696 15:03:36 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:58.696 15:03:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.696 15:03:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.696 15:03:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.696 15:03:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:58.696 15:03:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.696 15:03:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.696 15:03:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.696 15:03:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:58.696 15:03:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.696 15:03:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.696 15:03:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:00.618 15:03:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:00.618 15:03:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:00.618 15:03:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:00.618 15:03:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:00.618 15:03:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:00.618 15:03:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:00.618 15:03:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:00.618 15:03:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:00.618 15:03:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:00.618 15:03:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:00.618 15:03:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:00.618 15:03:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:00.618 15:03:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:00.618 15:03:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:00.618 15:03:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:00.618 15:03:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:00.618 15:03:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:00.618 15:03:38 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:08:00.618 15:03:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:00.618 15:03:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:00.618 15:03:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:00.618 15:03:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:00.618 15:03:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:00.618 15:03:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:00.618 15:03:38 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:00.618 15:03:38 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:00.618 15:03:38 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:00.618 00:08:00.618 real 0m2.867s 00:08:00.618 user 0m2.570s 00:08:00.618 sys 0m0.206s 00:08:00.618 15:03:38 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:00.618 15:03:38 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:00.618 ************************************ 00:08:00.618 END TEST accel_xor 00:08:00.618 ************************************ 00:08:00.618 15:03:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:00.618 15:03:38 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:00.618 15:03:38 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:00.618 15:03:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.618 15:03:38 accel -- common/autotest_common.sh@10 -- # set +x 00:08:00.878 ************************************ 00:08:00.878 START TEST accel_xor 00:08:00.878 ************************************ 00:08:00.878 15:03:38 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:08:00.878 15:03:38 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:00.878 15:03:38 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:00.878 15:03:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:00.878 15:03:38 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:08:00.878 15:03:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:00.878 15:03:38 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:00.878 15:03:38 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:00.878 15:03:38 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:00.878 15:03:38 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:00.878 15:03:38 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:00.878 15:03:38 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:00.878 15:03:38 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:00.878 15:03:38 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:00.878 15:03:38 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:00.878 [2024-07-15 15:03:38.792919] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
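For reference, the 3-source xor pass started just above is driven by the accel_perf command recorded in the trace (accel/accel.sh@12). Assuming the same build tree, a comparable standalone run would look roughly like the sketch below; the JSON accel config the harness pipes in on /dev/fd/62 is omitted here, and accel_perf should then simply fall back to the software module, which is the module this run reports using anyway.

$ /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3

Here -t is the run time in seconds, -w selects the workload, -x sets the number of xor source buffers, and -y asks for result verification; these correspond to the values echoed in the val= trace that follows.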
00:08:00.878 [2024-07-15 15:03:38.793552] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65943 ] 00:08:00.878 [2024-07-15 15:03:38.957682] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.136 [2024-07-15 15:03:39.207087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.395 15:03:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.928 15:03:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.928 15:03:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.928 15:03:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.928 15:03:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.928 15:03:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.928 15:03:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.928 15:03:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.928 15:03:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.928 15:03:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.928 15:03:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.928 15:03:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.928 15:03:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.928 15:03:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.928 15:03:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.928 15:03:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.929 15:03:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.929 15:03:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.929 15:03:41 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:08:03.929 15:03:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.929 15:03:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.929 15:03:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.929 15:03:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.929 15:03:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.929 15:03:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.929 15:03:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:03.929 ************************************ 00:08:03.929 END TEST accel_xor 00:08:03.929 15:03:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:03.929 15:03:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:03.929 00:08:03.929 real 0m2.977s 00:08:03.929 user 0m2.712s 00:08:03.929 sys 0m0.172s 00:08:03.929 15:03:41 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.929 15:03:41 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:03.929 ************************************ 00:08:03.929 15:03:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:03.929 15:03:41 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:08:03.929 15:03:41 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:03.929 15:03:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.929 15:03:41 accel -- common/autotest_common.sh@10 -- # set +x 00:08:03.929 ************************************ 00:08:03.929 START TEST accel_dif_verify 00:08:03.929 ************************************ 00:08:03.929 15:03:41 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:08:03.929 15:03:41 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:08:03.929 15:03:41 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:08:03.929 15:03:41 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:08:03.929 15:03:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:03.929 15:03:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:03.929 15:03:41 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:03.929 15:03:41 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:03.929 15:03:41 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:03.929 15:03:41 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:03.929 15:03:41 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:03.929 15:03:41 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:03.929 15:03:41 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:03.929 15:03:41 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:08:03.929 15:03:41 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:08:03.929 [2024-07-15 15:03:41.831886] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
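The dif_verify pass launched here follows the same shape; the DIF-flavoured workloads in this section (dif_verify here, dif_generate and dif_generate_copy further down) differ only in the -w value handed to accel_perf. A rough standalone equivalent, again leaving out the /dev/fd/62 config:

$ /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify

The data layout used for DIF (4096-byte buffers, 512-byte blocks, 8 bytes of metadata) is not passed on the command line; it comes from the test script itself and shows up in the val= trace below.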
00:08:03.929 [2024-07-15 15:03:41.832142] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65991 ] 00:08:03.929 [2024-07-15 15:03:42.000898] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.187 [2024-07-15 15:03:42.247844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:04.446 15:03:42 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:04.446 15:03:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.977 15:03:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:06.977 15:03:44 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:06.977 15:03:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.977 15:03:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.977 15:03:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:06.977 15:03:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:06.977 15:03:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.977 15:03:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.977 15:03:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:06.977 15:03:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:06.977 15:03:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.977 15:03:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.977 15:03:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:06.977 15:03:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:06.977 15:03:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.977 15:03:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.977 15:03:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:06.977 15:03:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:06.977 15:03:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.977 15:03:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.977 15:03:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:06.977 15:03:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:06.977 15:03:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.977 15:03:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.977 ************************************ 00:08:06.977 END TEST accel_dif_verify 00:08:06.977 ************************************ 00:08:06.977 15:03:44 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:06.977 15:03:44 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:08:06.977 15:03:44 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:06.977 00:08:06.977 real 0m2.818s 00:08:06.977 user 0m0.020s 00:08:06.977 sys 0m0.004s 00:08:06.977 15:03:44 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:06.977 15:03:44 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:08:06.977 15:03:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:06.977 15:03:44 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:08:06.977 15:03:44 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:06.977 15:03:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.977 15:03:44 accel -- common/autotest_common.sh@10 -- # set +x 00:08:06.977 ************************************ 00:08:06.977 START TEST accel_dif_generate 00:08:06.977 ************************************ 00:08:06.977 15:03:44 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:08:06.977 15:03:44 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:08:06.977 15:03:44 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:08:06.977 15:03:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:06.977 15:03:44 
accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:08:06.977 15:03:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:06.977 15:03:44 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:06.977 15:03:44 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:08:06.977 15:03:44 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:06.977 15:03:44 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:06.977 15:03:44 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:06.977 15:03:44 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:06.977 15:03:44 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:06.977 15:03:44 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:08:06.977 15:03:44 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:08:06.977 [2024-07-15 15:03:44.695624] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:08:06.977 [2024-07-15 15:03:44.695848] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66043 ] 00:08:06.977 [2024-07-15 15:03:44.862915] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.235 [2024-07-15 15:03:45.121427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:07.495 15:03:45 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:08:07.495 15:03:45 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:07.495 15:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:09.529 15:03:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:09.529 15:03:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:09.529 15:03:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:09.529 15:03:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:09.529 15:03:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:09.529 15:03:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:09.529 15:03:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:09.529 15:03:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:09.529 15:03:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:09.529 15:03:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:09.529 15:03:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:09.529 15:03:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:09.529 15:03:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:09.529 15:03:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:09.529 15:03:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:09.529 15:03:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:09.529 15:03:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:09.529 15:03:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:09.529 15:03:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:09.529 15:03:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:09.529 15:03:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:09.529 15:03:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:09.529 15:03:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:09.529 15:03:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:09.789 15:03:47 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:09.789 ************************************ 00:08:09.789 END TEST accel_dif_generate 00:08:09.789 ************************************ 00:08:09.789 15:03:47 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:08:09.789 
15:03:47 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:09.789 00:08:09.789 real 0m2.830s 00:08:09.789 user 0m0.020s 00:08:09.789 sys 0m0.005s 00:08:09.789 15:03:47 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.789 15:03:47 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:08:09.789 15:03:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:09.789 15:03:47 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:09.789 15:03:47 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:09.789 15:03:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.789 15:03:47 accel -- common/autotest_common.sh@10 -- # set +x 00:08:09.789 ************************************ 00:08:09.789 START TEST accel_dif_generate_copy 00:08:09.789 ************************************ 00:08:09.789 15:03:47 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:08:09.789 15:03:47 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:09.789 15:03:47 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:08:09.789 15:03:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:09.789 15:03:47 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:09.789 15:03:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:09.789 15:03:47 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:09.789 15:03:47 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:09.789 15:03:47 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:09.789 15:03:47 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:09.789 15:03:47 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:09.789 15:03:47 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:09.789 15:03:47 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:09.789 15:03:47 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:09.789 15:03:47 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:08:09.789 [2024-07-15 15:03:47.753975] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
00:08:09.789 [2024-07-15 15:03:47.754196] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66095 ] 00:08:10.048 [2024-07-15 15:03:47.919793] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.307 [2024-07-15 15:03:48.173232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.567 15:03:48 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.567 15:03:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.470 15:03:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:12.471 15:03:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.471 15:03:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:08:12.471 15:03:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.471 15:03:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:12.471 15:03:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.471 15:03:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.471 15:03:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.471 15:03:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:12.471 15:03:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.471 15:03:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.471 15:03:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.471 15:03:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:12.471 15:03:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.471 15:03:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.471 15:03:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.471 15:03:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:12.471 15:03:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.471 15:03:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.471 15:03:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.471 15:03:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:12.471 15:03:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.471 15:03:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.471 15:03:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.471 15:03:50 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:12.471 ************************************ 00:08:12.471 END TEST accel_dif_generate_copy 00:08:12.471 ************************************ 00:08:12.471 15:03:50 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:08:12.471 15:03:50 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:12.471 00:08:12.471 real 0m2.816s 00:08:12.471 user 0m2.545s 00:08:12.471 sys 0m0.174s 00:08:12.471 15:03:50 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.471 15:03:50 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:08:12.471 15:03:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:12.471 15:03:50 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:08:12.471 15:03:50 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:12.471 15:03:50 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:08:12.471 15:03:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.471 15:03:50 accel -- common/autotest_common.sh@10 -- # set +x 00:08:12.471 ************************************ 00:08:12.471 START TEST accel_comp 00:08:12.471 ************************************ 00:08:12.471 15:03:50 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:12.471 15:03:50 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:08:12.471 15:03:50 
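Unlike the xor and DIF passes above, the compress pass starting here gives accel_perf an input file: -l points at the bib corpus under the SPDK test tree, and the decompress pass further down reuses the same file with -y added for verification. A rough standalone equivalent, with the same caveat about the omitted /dev/fd/62 config:

$ /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib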
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:08:12.471 15:03:50 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:12.471 15:03:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:12.471 15:03:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:12.471 15:03:50 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:12.471 15:03:50 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:08:12.471 15:03:50 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:12.471 15:03:50 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:12.471 15:03:50 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:12.471 15:03:50 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:12.471 15:03:50 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:12.471 15:03:50 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:08:12.471 15:03:50 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:08:12.730 [2024-07-15 15:03:50.628856] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:08:12.730 [2024-07-15 15:03:50.629004] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66136 ] 00:08:12.730 [2024-07-15 15:03:50.797640] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.989 [2024-07-15 15:03:51.035110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.248 15:03:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.248 15:03:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.248 15:03:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.248 15:03:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.248 15:03:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.248 15:03:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.248 15:03:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.248 15:03:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.248 15:03:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.248 15:03:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.248 15:03:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.248 15:03:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.248 15:03:51 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:08:13.248 15:03:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.248 15:03:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.248 15:03:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.248 15:03:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.248 15:03:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.248 15:03:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.248 15:03:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.248 15:03:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.248 15:03:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.248 15:03:51 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:08:13.248 15:03:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.248 15:03:51 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:08:13.248 15:03:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.248 15:03:51 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:08:13.248 15:03:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.248 15:03:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.248 15:03:51 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:13.248 15:03:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.248 15:03:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.248 15:03:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.248 15:03:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.249 15:03:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.249 15:03:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.249 15:03:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.249 15:03:51 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:08:13.249 15:03:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.249 15:03:51 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:08:13.249 15:03:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.249 15:03:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.249 15:03:51 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:13.249 15:03:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.249 15:03:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.249 15:03:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.249 15:03:51 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:13.249 15:03:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.249 15:03:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.249 15:03:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.249 15:03:51 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:13.249 15:03:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.249 15:03:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.249 15:03:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.249 15:03:51 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:08:13.249 15:03:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.249 15:03:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.249 15:03:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.249 15:03:51 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:13.249 15:03:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.249 15:03:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.249 15:03:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.249 15:03:51 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:08:13.249 15:03:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.249 15:03:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.249 15:03:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.249 15:03:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.249 15:03:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.249 15:03:51 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.249 15:03:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.249 15:03:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.249 15:03:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.249 15:03:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.249 15:03:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.785 15:03:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:15.785 15:03:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.785 15:03:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:15.785 15:03:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.785 15:03:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:15.785 15:03:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.785 15:03:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:15.785 15:03:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.785 15:03:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:15.785 15:03:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.785 15:03:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:15.785 15:03:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.785 15:03:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:15.785 15:03:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.785 15:03:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:15.785 15:03:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.785 15:03:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:15.785 15:03:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.785 15:03:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:15.785 15:03:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.785 15:03:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:15.785 15:03:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.785 15:03:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:15.785 15:03:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.785 15:03:53 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:15.785 15:03:53 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:15.785 15:03:53 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:15.785 00:08:15.785 real 0m2.772s 00:08:15.785 user 0m2.497s 00:08:15.785 sys 0m0.188s 00:08:15.785 ************************************ 00:08:15.785 END TEST accel_comp 00:08:15.785 ************************************ 00:08:15.785 15:03:53 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:15.785 15:03:53 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:08:15.785 15:03:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:15.785 15:03:53 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:15.785 15:03:53 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:15.785 15:03:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.785 15:03:53 accel -- common/autotest_common.sh@10 -- # set +x 00:08:15.785 ************************************ 00:08:15.785 START TEST accel_decomp 00:08:15.785 ************************************ 00:08:15.785 15:03:53 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:15.785 15:03:53 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:08:15.785 15:03:53 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:08:15.785 15:03:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:15.785 15:03:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:15.785 15:03:53 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:15.785 15:03:53 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:15.785 15:03:53 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:08:15.785 15:03:53 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:15.785 15:03:53 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:15.785 15:03:53 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:15.785 15:03:53 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:15.785 15:03:53 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:15.785 15:03:53 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:08:15.785 15:03:53 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:08:15.785 [2024-07-15 15:03:53.457937] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:08:15.785 [2024-07-15 15:03:53.458081] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66188 ] 00:08:15.785 [2024-07-15 15:03:53.624161] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.785 [2024-07-15 15:03:53.877907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.044 15:03:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:16.044 15:03:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.044 15:03:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.044 15:03:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.044 15:03:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:16.044 15:03:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.044 15:03:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.044 15:03:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.044 15:03:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:16.044 15:03:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.044 15:03:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.044 15:03:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.044 15:03:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:08:16.044 15:03:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.044 15:03:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.044 15:03:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.044 15:03:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:16.044 15:03:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.044 15:03:54 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:08:16.044 15:03:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.044 15:03:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:16.044 15:03:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.044 15:03:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.044 15:03:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.044 15:03:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:08:16.044 15:03:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.044 15:03:54 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:16.044 15:03:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.044 15:03:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.044 15:03:54 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:16.044 15:03:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.044 15:03:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.044 15:03:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.044 15:03:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:16.044 15:03:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.044 15:03:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.044 15:03:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.044 15:03:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:08:16.044 15:03:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.044 15:03:54 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:08:16.044 15:03:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.044 15:03:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.304 15:03:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:16.304 15:03:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.304 15:03:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.304 15:03:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.304 15:03:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:16.304 15:03:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.304 15:03:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.304 15:03:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.304 15:03:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:16.304 15:03:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.304 15:03:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.304 15:03:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.304 15:03:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:08:16.304 15:03:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.304 15:03:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.304 15:03:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.304 15:03:54 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:16.304 15:03:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.304 15:03:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.304 15:03:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.304 15:03:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
00:08:16.304 15:03:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.304 15:03:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.304 15:03:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.304 15:03:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:16.304 15:03:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.304 15:03:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.304 15:03:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:16.304 15:03:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:16.304 15:03:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.304 15:03:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:16.304 15:03:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.215 15:03:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.215 15:03:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.215 15:03:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.215 15:03:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.215 15:03:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.215 15:03:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.215 15:03:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.215 15:03:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.215 15:03:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.215 15:03:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.215 15:03:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.215 15:03:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.215 15:03:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.215 15:03:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.215 15:03:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.215 15:03:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.215 15:03:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.215 15:03:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.215 15:03:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.215 15:03:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.215 15:03:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.215 15:03:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.215 15:03:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.215 15:03:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.215 15:03:56 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:18.215 15:03:56 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:18.215 15:03:56 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:18.215 00:08:18.215 real 0m2.860s 00:08:18.215 user 0m2.589s 00:08:18.215 sys 0m0.186s 00:08:18.215 15:03:56 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:18.215 15:03:56 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:08:18.215 ************************************ 00:08:18.215 END TEST accel_decomp 00:08:18.215 ************************************ 00:08:18.215 15:03:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:18.215 15:03:56 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:18.215 15:03:56 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:18.215 15:03:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.215 15:03:56 accel -- common/autotest_common.sh@10 -- # set +x 00:08:18.215 ************************************ 00:08:18.215 START TEST accel_decomp_full 00:08:18.215 ************************************ 00:08:18.215 15:03:56 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:18.215 15:03:56 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:08:18.215 15:03:56 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:08:18.215 15:03:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.215 15:03:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.215 15:03:56 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:18.215 15:03:56 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:18.215 15:03:56 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:08:18.215 15:03:56 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:18.215 15:03:56 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:18.215 15:03:56 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:18.215 15:03:56 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:18.215 15:03:56 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:18.215 15:03:56 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:08:18.215 15:03:56 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:08:18.474 [2024-07-15 15:03:56.373028] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
00:08:18.474 [2024-07-15 15:03:56.373155] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66240 ] 00:08:18.474 [2024-07-15 15:03:56.541118] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.733 [2024-07-15 15:03:56.797540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.992 15:03:57 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.992 15:03:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.993 15:03:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:18.993 15:03:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.993 15:03:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.993 15:03:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.993 15:03:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:18.993 15:03:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.993 15:03:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.993 15:03:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:21.527 15:03:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:21.527 15:03:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:21.527 15:03:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:21.527 15:03:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:21.527 15:03:59 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:21.527 15:03:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:21.527 15:03:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:21.527 15:03:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:21.527 15:03:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:21.527 15:03:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:21.527 15:03:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:21.527 15:03:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:21.527 15:03:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:21.527 15:03:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:21.527 15:03:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:21.527 15:03:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:21.527 15:03:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:21.527 15:03:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:21.527 15:03:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:21.527 15:03:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:21.527 15:03:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:21.527 15:03:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:21.527 15:03:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:21.527 15:03:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:21.527 15:03:59 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:21.527 15:03:59 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:21.527 15:03:59 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:21.527 00:08:21.527 real 0m2.908s 00:08:21.527 user 0m2.642s 00:08:21.527 sys 0m0.177s 00:08:21.527 15:03:59 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:21.527 15:03:59 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:08:21.527 ************************************ 00:08:21.527 END TEST accel_decomp_full 00:08:21.527 ************************************ 00:08:21.527 15:03:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:21.527 15:03:59 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:21.527 15:03:59 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:21.527 15:03:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.527 15:03:59 accel -- common/autotest_common.sh@10 -- # set +x 00:08:21.527 ************************************ 00:08:21.527 START TEST accel_decomp_mcore 00:08:21.527 ************************************ 00:08:21.527 15:03:59 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:21.527 15:03:59 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:21.527 15:03:59 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:21.527 15:03:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.527 15:03:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.527 15:03:59 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:21.527 15:03:59 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:21.527 15:03:59 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:21.527 15:03:59 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:21.527 15:03:59 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:21.527 15:03:59 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:21.527 15:03:59 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:21.527 15:03:59 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:21.527 15:03:59 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:21.527 15:03:59 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:21.527 [2024-07-15 15:03:59.333127] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:08:21.527 [2024-07-15 15:03:59.333250] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66291 ] 00:08:21.527 [2024-07-15 15:03:59.502370] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:21.786 [2024-07-15 15:03:59.749734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.786 [2024-07-15 15:03:59.749982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:21.786 [2024-07-15 15:03:59.750102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.786 [2024-07-15 15:03:59.750133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:22.045 15:04:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.575 15:04:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:24.575 15:04:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.575 15:04:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.575 15:04:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.575 15:04:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:24.575 15:04:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.575 15:04:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.575 15:04:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.575 15:04:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:24.575 15:04:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.575 15:04:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.575 15:04:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.575 15:04:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:24.575 15:04:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.575 15:04:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.575 15:04:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.575 15:04:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:24.575 15:04:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.575 15:04:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.575 15:04:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.575 15:04:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:24.575 15:04:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.575 15:04:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.575 15:04:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.575 15:04:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:24.575 15:04:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.575 15:04:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.575 15:04:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.575 15:04:02 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:24.575 15:04:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.575 15:04:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.575 15:04:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.575 15:04:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:24.575 15:04:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.575 15:04:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.575 15:04:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.575 15:04:02 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:24.575 15:04:02 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:24.575 15:04:02 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:24.575 00:08:24.575 real 0m2.873s 00:08:24.575 user 0m0.020s 00:08:24.575 sys 0m0.002s 00:08:24.575 15:04:02 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:24.575 15:04:02 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:24.575 ************************************ 00:08:24.575 END TEST accel_decomp_mcore 00:08:24.575 ************************************ 00:08:24.575 15:04:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:24.575 15:04:02 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:24.575 15:04:02 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:24.575 15:04:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:24.575 15:04:02 accel -- common/autotest_common.sh@10 -- # set +x 00:08:24.575 ************************************ 00:08:24.575 START TEST accel_decomp_full_mcore 00:08:24.575 ************************************ 00:08:24.575 15:04:02 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:24.575 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:24.575 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:24.575 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.575 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.575 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:24.575 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:24.575 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:24.575 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:24.575 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:24.575 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:24.575 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:24.575 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:24.575 15:04:02 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:24.575 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:24.575 [2024-07-15 15:04:02.264029] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:08:24.575 [2024-07-15 15:04:02.264160] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66342 ] 00:08:24.575 [2024-07-15 15:04:02.431492] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:24.575 [2024-07-15 15:04:02.684429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.575 [2024-07-15 15:04:02.684674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:24.575 [2024-07-15 15:04:02.684751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.575 [2024-07-15 15:04:02.684775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:24.834 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:24.834 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.834 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.834 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.834 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:24.834 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.834 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.834 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.834 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:24.834 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.834 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.834 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.834 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:24.834 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.834 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.834 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.834 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:25.094 15:04:02 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.094 15:04:02 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.094 15:04:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:27.643 15:04:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:27.643 15:04:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:27.643 15:04:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:27.643 15:04:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:27.643 15:04:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:27.643 15:04:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:27.643 15:04:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:27.643 15:04:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:27.643 15:04:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:27.643 15:04:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:27.643 15:04:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:27.643 15:04:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:27.643 15:04:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:27.643 15:04:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:27.643 15:04:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:27.643 15:04:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:27.643 15:04:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:27.643 15:04:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:27.643 15:04:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:27.643 15:04:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:27.643 15:04:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:27.643 15:04:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:27.643 15:04:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:27.643 15:04:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:27.643 15:04:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:27.643 15:04:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:27.643 15:04:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:27.643 15:04:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:27.643 15:04:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:27.643 15:04:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:27.643 15:04:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:27.643 15:04:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:27.643 15:04:05 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:27.643 15:04:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:27.643 15:04:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:27.643 15:04:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:27.643 15:04:05 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:27.643 15:04:05 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:27.643 ************************************ 00:08:27.643 15:04:05 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:27.643 00:08:27.643 real 0m3.032s 00:08:27.643 user 0m8.683s 00:08:27.643 sys 0m0.214s 00:08:27.643 15:04:05 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:27.644 15:04:05 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:27.644 END TEST accel_decomp_full_mcore 00:08:27.644 ************************************ 00:08:27.644 15:04:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:27.644 15:04:05 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:27.644 15:04:05 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:27.644 15:04:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:27.644 15:04:05 accel -- common/autotest_common.sh@10 -- # set +x 00:08:27.644 ************************************ 00:08:27.644 START TEST accel_decomp_mthread 00:08:27.644 ************************************ 00:08:27.644 15:04:05 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:27.644 15:04:05 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:27.644 15:04:05 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:27.644 15:04:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:27.644 15:04:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:27.644 15:04:05 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:27.644 15:04:05 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:27.644 15:04:05 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:27.644 15:04:05 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:27.644 15:04:05 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:27.644 15:04:05 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:27.644 15:04:05 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:27.644 15:04:05 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:27.644 15:04:05 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:27.644 15:04:05 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:27.644 [2024-07-15 15:04:05.364254] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
00:08:27.644 [2024-07-15 15:04:05.364469] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66397 ] 00:08:27.644 [2024-07-15 15:04:05.534786] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.903 [2024-07-15 15:04:05.823824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.163 15:04:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:30.698 15:04:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:30.698 15:04:08 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:08:30.698 15:04:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:30.698 15:04:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:30.698 15:04:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:30.698 15:04:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:30.698 15:04:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:30.698 15:04:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:30.698 15:04:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:30.698 15:04:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:30.698 15:04:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:30.698 15:04:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:30.698 15:04:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:30.698 15:04:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:30.698 15:04:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:30.698 15:04:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:30.698 15:04:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:30.698 15:04:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:30.698 15:04:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:30.698 15:04:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:30.698 15:04:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:30.698 15:04:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:30.698 15:04:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:30.698 15:04:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:30.698 15:04:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:30.698 15:04:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:30.698 15:04:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:30.698 15:04:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:30.698 15:04:08 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:30.698 15:04:08 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:30.698 15:04:08 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:30.698 ************************************ 00:08:30.698 END TEST accel_decomp_mthread 00:08:30.698 ************************************ 00:08:30.698 00:08:30.698 real 0m2.989s 00:08:30.698 user 0m2.647s 00:08:30.698 sys 0m0.248s 00:08:30.698 15:04:08 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:30.698 15:04:08 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:30.698 15:04:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:30.698 15:04:08 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:30.698 15:04:08 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:30.698 15:04:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.698 15:04:08 accel -- common/autotest_common.sh@10 -- # set +x 00:08:30.698 ************************************ 00:08:30.698 START 
TEST accel_decomp_full_mthread 00:08:30.698 ************************************ 00:08:30.698 15:04:08 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:30.698 15:04:08 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:30.698 15:04:08 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:30.698 15:04:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:30.698 15:04:08 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:30.698 15:04:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:30.698 15:04:08 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:30.698 15:04:08 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:30.699 15:04:08 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:30.699 15:04:08 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:30.699 15:04:08 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:30.699 15:04:08 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:30.699 15:04:08 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:30.699 15:04:08 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:30.699 15:04:08 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:30.699 [2024-07-15 15:04:08.416028] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
00:08:30.699 [2024-07-15 15:04:08.416231] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66449 ] 00:08:30.699 [2024-07-15 15:04:08.585745] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.958 [2024-07-15 15:04:08.872263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.216 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.217 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:31.217 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.217 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.217 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.217 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.217 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.217 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.217 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.217 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.217 15:04:09 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.217 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.217 15:04:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:33.763 15:04:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:33.763 15:04:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:33.763 15:04:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:33.763 15:04:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:33.763 15:04:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:33.763 15:04:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:33.763 15:04:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:33.763 15:04:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:33.763 15:04:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:33.763 15:04:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:33.763 15:04:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:33.763 15:04:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:33.763 15:04:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:33.763 15:04:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:33.763 15:04:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:33.763 15:04:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:33.763 15:04:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:33.763 15:04:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:33.763 15:04:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:33.763 15:04:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:33.763 15:04:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:33.763 15:04:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:33.763 15:04:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:33.763 15:04:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:33.763 15:04:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:33.763 15:04:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:33.763 15:04:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:33.763 15:04:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:33.763 15:04:11 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:33.763 15:04:11 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:33.763 15:04:11 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:33.763 00:08:33.763 real 0m3.054s 00:08:33.763 user 0m2.698s 00:08:33.763 sys 0m0.253s 00:08:33.763 15:04:11 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:33.763 ************************************ 00:08:33.763 END TEST accel_decomp_full_mthread 00:08:33.763 ************************************ 00:08:33.763 15:04:11 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 
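Both decompress suites above (accel_decomp_mthread and the accel_decomp_full_mthread run that just finished) drive the same accel_perf example binary, and every flag is visible in the trace; the run used the plain software accel module. A minimal sketch of reproducing the full_mthread case by hand, assuming the repo checkout path from this log and leaving out the accel JSON config that the harness pipes in over /dev/fd/62 (assumed unnecessary here because only the software module was in play in this run):

    # decompress the pre-built test file for 1 second (-t 1) on 2 threads (-T 2),
    # verifying the output (-y); -o 0 selects the whole-file transfer size, which the
    # trace above reports as '111250 bytes' instead of the 4096-byte default
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
        -y -o 0 -T 2
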
00:08:33.763 15:04:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:33.763 15:04:11 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:08:33.763 15:04:11 accel -- accel/accel.sh@137 -- # build_accel_config 00:08:33.763 15:04:11 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:33.763 15:04:11 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:33.763 15:04:11 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:33.763 15:04:11 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:33.763 15:04:11 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:33.763 15:04:11 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:33.763 15:04:11 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:33.763 15:04:11 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:33.763 15:04:11 accel -- accel/accel.sh@41 -- # jq -r . 00:08:33.763 15:04:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.763 15:04:11 accel -- common/autotest_common.sh@10 -- # set +x 00:08:33.763 ************************************ 00:08:33.763 START TEST accel_dif_functional_tests 00:08:33.763 ************************************ 00:08:33.763 15:04:11 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:33.763 [2024-07-15 15:04:11.579200] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:08:33.763 [2024-07-15 15:04:11.579469] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66502 ] 00:08:33.763 [2024-07-15 15:04:11.755165] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:34.021 [2024-07-15 15:04:12.062652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.021 [2024-07-15 15:04:12.062740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:34.021 [2024-07-15 15:04:12.062714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.587 00:08:34.587 00:08:34.587 CUnit - A unit testing framework for C - Version 2.1-3 00:08:34.587 http://cunit.sourceforge.net/ 00:08:34.587 00:08:34.587 00:08:34.587 Suite: accel_dif 00:08:34.587 Test: verify: DIF generated, GUARD check ...passed 00:08:34.587 Test: verify: DIF generated, APPTAG check ...passed 00:08:34.587 Test: verify: DIF generated, REFTAG check ...passed 00:08:34.587 Test: verify: DIF not generated, GUARD check ...[2024-07-15 15:04:12.517563] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:34.587 passed 00:08:34.587 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 15:04:12.518079] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:34.587 passed 00:08:34.587 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 15:04:12.518224] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:34.587 passed 00:08:34.588 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:34.588 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:08:34.588 Test: verify: APPTAG incorrect, no APPTAG check ...[2024-07-15 15:04:12.518441] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, 
Actual=14 00:08:34.588 passed 00:08:34.588 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:34.588 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:34.588 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 15:04:12.518775] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:34.588 passed 00:08:34.588 Test: verify copy: DIF generated, GUARD check ...passed 00:08:34.588 Test: verify copy: DIF generated, APPTAG check ...passed 00:08:34.588 Test: verify copy: DIF generated, REFTAG check ...passed 00:08:34.588 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 15:04:12.519207] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:34.588 passed 00:08:34.588 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 15:04:12.519328] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:34.588 passed 00:08:34.588 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 15:04:12.519434] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:34.588 passed 00:08:34.588 Test: generate copy: DIF generated, GUARD check ...passed 00:08:34.588 Test: generate copy: DIF generated, APTTAG check ...passed 00:08:34.588 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:34.588 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:34.588 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:34.588 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:34.588 Test: generate copy: iovecs-len validate ...[2024-07-15 15:04:12.520117] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:08:34.588 passed 00:08:34.588 Test: generate copy: buffer alignment validate ...passed 00:08:34.588 00:08:34.588 Run Summary: Type Total Ran Passed Failed Inactive 00:08:34.588 suites 1 1 n/a 0 0 00:08:34.588 tests 26 26 26 0 0 00:08:34.588 asserts 115 115 115 0 n/a 00:08:34.588 00:08:34.588 Elapsed time = 0.006 seconds 00:08:36.490 00:08:36.490 real 0m2.621s 00:08:36.490 user 0m5.053s 00:08:36.490 sys 0m0.364s 00:08:36.490 ************************************ 00:08:36.490 END TEST accel_dif_functional_tests 00:08:36.490 ************************************ 00:08:36.490 15:04:14 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:36.490 15:04:14 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:08:36.490 15:04:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:36.490 ************************************ 00:08:36.490 END TEST accel 00:08:36.490 ************************************ 00:08:36.490 00:08:36.490 real 1m11.108s 00:08:36.490 user 1m17.738s 00:08:36.490 sys 0m6.249s 00:08:36.490 15:04:14 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:36.490 15:04:14 accel -- common/autotest_common.sh@10 -- # set +x 00:08:36.490 15:04:14 -- common/autotest_common.sh@1142 -- # return 0 00:08:36.490 15:04:14 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:36.490 15:04:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:36.490 15:04:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:36.490 15:04:14 -- common/autotest_common.sh@10 -- # set +x 00:08:36.490 ************************************ 00:08:36.490 START TEST accel_rpc 00:08:36.490 ************************************ 00:08:36.490 15:04:14 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:36.490 * Looking for test storage... 00:08:36.490 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:08:36.490 15:04:14 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:36.490 15:04:14 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=66590 00:08:36.490 15:04:14 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:36.490 15:04:14 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 66590 00:08:36.490 15:04:14 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 66590 ']' 00:08:36.490 15:04:14 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.490 15:04:14 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:36.490 15:04:14 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.490 15:04:14 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:36.490 15:04:14 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.490 [2024-07-15 15:04:14.466508] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
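The accel_dif_functional_tests run that completed just above (before the accel_rpc target began starting up) is not an accel_perf workload; it is a standalone CUnit binary, and the trace shows it being handed the generated accel config on file descriptor 62. A minimal sketch of the same invocation, with the hypothetical ACCEL_CONFIG variable standing in for that generated config file:

    # runs the DIF verify / verify-copy / generate-copy suites against the accel framework;
    # ACCEL_CONFIG is a placeholder for the JSON config the harness normally pipes over /dev/fd/62
    /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c "$ACCEL_CONFIG"
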
00:08:36.490 [2024-07-15 15:04:14.466677] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66590 ] 00:08:36.750 [2024-07-15 15:04:14.643323] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.009 [2024-07-15 15:04:14.946724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.269 15:04:15 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:37.269 15:04:15 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:37.269 15:04:15 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:37.269 15:04:15 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:37.269 15:04:15 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:37.269 15:04:15 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:37.269 15:04:15 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:37.269 15:04:15 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:37.269 15:04:15 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:37.269 15:04:15 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.269 ************************************ 00:08:37.269 START TEST accel_assign_opcode 00:08:37.269 ************************************ 00:08:37.269 15:04:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:08:37.269 15:04:15 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:37.269 15:04:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.269 15:04:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:37.269 [2024-07-15 15:04:15.303049] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:37.269 15:04:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.269 15:04:15 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:37.269 15:04:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.269 15:04:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:37.269 [2024-07-15 15:04:15.311012] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:37.269 15:04:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.269 15:04:15 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:37.269 15:04:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.269 15:04:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:38.647 15:04:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.647 15:04:16 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:38.647 15:04:16 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:38.647 15:04:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.647 15:04:16 accel_rpc.accel_assign_opcode 
-- accel/accel_rpc.sh@42 -- # grep software 00:08:38.647 15:04:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:38.647 15:04:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.647 software 00:08:38.647 00:08:38.647 ************************************ 00:08:38.647 END TEST accel_assign_opcode 00:08:38.647 ************************************ 00:08:38.647 real 0m1.129s 00:08:38.647 user 0m0.050s 00:08:38.647 sys 0m0.015s 00:08:38.647 15:04:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:38.647 15:04:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:38.647 15:04:16 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:38.647 15:04:16 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 66590 00:08:38.647 15:04:16 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 66590 ']' 00:08:38.647 15:04:16 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 66590 00:08:38.647 15:04:16 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:08:38.647 15:04:16 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:38.647 15:04:16 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66590 00:08:38.647 killing process with pid 66590 00:08:38.647 15:04:16 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:38.647 15:04:16 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:38.647 15:04:16 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66590' 00:08:38.647 15:04:16 accel_rpc -- common/autotest_common.sh@967 -- # kill 66590 00:08:38.647 15:04:16 accel_rpc -- common/autotest_common.sh@972 -- # wait 66590 00:08:41.933 ************************************ 00:08:41.933 END TEST accel_rpc 00:08:41.933 ************************************ 00:08:41.933 00:08:41.933 real 0m5.263s 00:08:41.933 user 0m5.049s 00:08:41.933 sys 0m0.666s 00:08:41.933 15:04:19 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:41.933 15:04:19 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:41.933 15:04:19 -- common/autotest_common.sh@1142 -- # return 0 00:08:41.933 15:04:19 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:41.933 15:04:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:41.933 15:04:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:41.933 15:04:19 -- common/autotest_common.sh@10 -- # set +x 00:08:41.933 ************************************ 00:08:41.933 START TEST app_cmdline 00:08:41.933 ************************************ 00:08:41.933 15:04:19 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:41.933 * Looking for test storage... 
00:08:41.933 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:41.933 15:04:19 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:41.933 15:04:19 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=66717 00:08:41.933 15:04:19 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:41.933 15:04:19 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 66717 00:08:41.933 15:04:19 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 66717 ']' 00:08:41.933 15:04:19 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.933 15:04:19 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:41.933 15:04:19 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.933 15:04:19 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:41.933 15:04:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:41.933 [2024-07-15 15:04:19.779068] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:08:41.933 [2024-07-15 15:04:19.779212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66717 ] 00:08:41.933 [2024-07-15 15:04:19.942316] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.191 [2024-07-15 15:04:20.193736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.129 15:04:21 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:43.129 15:04:21 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:08:43.129 15:04:21 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:43.387 { 00:08:43.387 "version": "SPDK v24.09-pre git sha1 33d82c0da", 00:08:43.387 "fields": { 00:08:43.387 "major": 24, 00:08:43.387 "minor": 9, 00:08:43.388 "patch": 0, 00:08:43.388 "suffix": "-pre", 00:08:43.388 "commit": "33d82c0da" 00:08:43.388 } 00:08:43.388 } 00:08:43.388 15:04:21 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:43.388 15:04:21 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:43.388 15:04:21 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:43.388 15:04:21 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:43.388 15:04:21 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:43.388 15:04:21 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.388 15:04:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:43.388 15:04:21 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:43.388 15:04:21 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:43.388 15:04:21 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.388 15:04:21 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:43.388 15:04:21 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:43.388 15:04:21 app_cmdline -- 
app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:43.388 15:04:21 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:08:43.388 15:04:21 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:43.388 15:04:21 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:43.388 15:04:21 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:43.388 15:04:21 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:43.388 15:04:21 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:43.388 15:04:21 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:43.388 15:04:21 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:43.388 15:04:21 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:43.388 15:04:21 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:43.388 15:04:21 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:43.646 request: 00:08:43.646 { 00:08:43.646 "method": "env_dpdk_get_mem_stats", 00:08:43.646 "req_id": 1 00:08:43.646 } 00:08:43.646 Got JSON-RPC error response 00:08:43.646 response: 00:08:43.646 { 00:08:43.646 "code": -32601, 00:08:43.646 "message": "Method not found" 00:08:43.646 } 00:08:43.646 15:04:21 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:08:43.646 15:04:21 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:43.646 15:04:21 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:43.646 15:04:21 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:43.646 15:04:21 app_cmdline -- app/cmdline.sh@1 -- # killprocess 66717 00:08:43.646 15:04:21 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 66717 ']' 00:08:43.646 15:04:21 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 66717 00:08:43.646 15:04:21 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:08:43.646 15:04:21 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:43.646 15:04:21 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66717 00:08:43.646 killing process with pid 66717 00:08:43.646 15:04:21 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:43.646 15:04:21 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:43.646 15:04:21 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66717' 00:08:43.646 15:04:21 app_cmdline -- common/autotest_common.sh@967 -- # kill 66717 00:08:43.646 15:04:21 app_cmdline -- common/autotest_common.sh@972 -- # wait 66717 00:08:46.179 ************************************ 00:08:46.179 END TEST app_cmdline 00:08:46.179 ************************************ 00:08:46.179 00:08:46.179 real 0m4.721s 00:08:46.179 user 0m5.021s 00:08:46.179 sys 0m0.575s 00:08:46.179 15:04:24 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:46.179 15:04:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:46.437 15:04:24 -- common/autotest_common.sh@1142 -- # return 0 00:08:46.437 15:04:24 -- 
spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:46.437 15:04:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:46.437 15:04:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:46.437 15:04:24 -- common/autotest_common.sh@10 -- # set +x 00:08:46.437 ************************************ 00:08:46.437 START TEST version 00:08:46.437 ************************************ 00:08:46.437 15:04:24 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:46.437 * Looking for test storage... 00:08:46.437 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:46.437 15:04:24 version -- app/version.sh@17 -- # get_header_version major 00:08:46.437 15:04:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:46.437 15:04:24 version -- app/version.sh@14 -- # cut -f2 00:08:46.437 15:04:24 version -- app/version.sh@14 -- # tr -d '"' 00:08:46.437 15:04:24 version -- app/version.sh@17 -- # major=24 00:08:46.437 15:04:24 version -- app/version.sh@18 -- # get_header_version minor 00:08:46.437 15:04:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:46.437 15:04:24 version -- app/version.sh@14 -- # cut -f2 00:08:46.437 15:04:24 version -- app/version.sh@14 -- # tr -d '"' 00:08:46.437 15:04:24 version -- app/version.sh@18 -- # minor=9 00:08:46.437 15:04:24 version -- app/version.sh@19 -- # get_header_version patch 00:08:46.437 15:04:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:46.437 15:04:24 version -- app/version.sh@14 -- # cut -f2 00:08:46.437 15:04:24 version -- app/version.sh@14 -- # tr -d '"' 00:08:46.437 15:04:24 version -- app/version.sh@19 -- # patch=0 00:08:46.437 15:04:24 version -- app/version.sh@20 -- # get_header_version suffix 00:08:46.437 15:04:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:46.437 15:04:24 version -- app/version.sh@14 -- # cut -f2 00:08:46.437 15:04:24 version -- app/version.sh@14 -- # tr -d '"' 00:08:46.437 15:04:24 version -- app/version.sh@20 -- # suffix=-pre 00:08:46.437 15:04:24 version -- app/version.sh@22 -- # version=24.9 00:08:46.437 15:04:24 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:46.437 15:04:24 version -- app/version.sh@28 -- # version=24.9rc0 00:08:46.437 15:04:24 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:46.437 15:04:24 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:46.437 15:04:24 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:46.696 15:04:24 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:46.696 00:08:46.696 real 0m0.217s 00:08:46.696 user 0m0.105s 00:08:46.696 sys 0m0.162s 00:08:46.696 15:04:24 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:46.696 ************************************ 00:08:46.696 END TEST version 00:08:46.696 ************************************ 00:08:46.696 15:04:24 version -- common/autotest_common.sh@10 -- # set +x 
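The version suite above never starts a target at all; it pulls each component straight out of include/spdk/version.h with the grep | cut | tr pipeline shown in the trace and then compares the assembled string against Python's spdk.__version__. A sketch of the same extraction, assuming the header path used in this run:

    repo=/home/vagrant/spdk_repo/spdk
    hdr=$repo/include/spdk/version.h
    # mirror the pipeline traced above: grab the #define line, take field 2, strip quotes
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    echo "${major}.${minor}${suffix}"   # 24.9-pre for this checkout
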
00:08:46.696 15:04:24 -- common/autotest_common.sh@1142 -- # return 0 00:08:46.696 15:04:24 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:08:46.696 15:04:24 -- spdk/autotest.sh@198 -- # uname -s 00:08:46.696 15:04:24 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:08:46.696 15:04:24 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:46.696 15:04:24 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:46.696 15:04:24 -- spdk/autotest.sh@211 -- # '[' 1 -eq 1 ']' 00:08:46.696 15:04:24 -- spdk/autotest.sh@212 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:46.696 15:04:24 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:46.696 15:04:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:46.696 15:04:24 -- common/autotest_common.sh@10 -- # set +x 00:08:46.696 ************************************ 00:08:46.696 START TEST blockdev_nvme 00:08:46.696 ************************************ 00:08:46.696 15:04:24 blockdev_nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:46.696 * Looking for test storage... 00:08:46.696 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:46.696 15:04:24 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:46.696 15:04:24 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:08:46.696 15:04:24 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:46.696 15:04:24 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:46.696 15:04:24 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:46.696 15:04:24 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:46.696 15:04:24 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:08:46.696 15:04:24 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:46.696 15:04:24 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:08:46.696 15:04:24 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:08:46.696 15:04:24 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:08:46.696 15:04:24 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:08:46.696 15:04:24 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:08:46.696 15:04:24 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:08:46.696 15:04:24 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:08:46.696 15:04:24 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:08:46.696 15:04:24 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:08:46.696 15:04:24 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:08:46.696 15:04:24 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:08:46.696 15:04:24 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:08:46.696 15:04:24 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:08:46.696 15:04:24 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:08:46.696 15:04:24 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:08:46.696 15:04:24 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:08:46.696 15:04:24 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=66896 00:08:46.696 15:04:24 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:46.696 
15:04:24 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:46.696 15:04:24 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 66896 00:08:46.696 15:04:24 blockdev_nvme -- common/autotest_common.sh@829 -- # '[' -z 66896 ']' 00:08:46.696 15:04:24 blockdev_nvme -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.696 15:04:24 blockdev_nvme -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:46.696 15:04:24 blockdev_nvme -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.696 15:04:24 blockdev_nvme -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:46.696 15:04:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:46.955 [2024-07-15 15:04:24.865367] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:08:46.955 [2024-07-15 15:04:24.865632] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66896 ] 00:08:46.955 [2024-07-15 15:04:25.036929] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.213 [2024-07-15 15:04:25.282163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.148 15:04:26 blockdev_nvme -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:48.148 15:04:26 blockdev_nvme -- common/autotest_common.sh@862 -- # return 0 00:08:48.148 15:04:26 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:08:48.148 15:04:26 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:08:48.148 15:04:26 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:08:48.148 15:04:26 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:48.148 15:04:26 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:48.407 15:04:26 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:48.407 15:04:26 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.407 15:04:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:48.665 15:04:26 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.665 15:04:26 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:08:48.665 15:04:26 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.665 15:04:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:48.665 15:04:26 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.665 15:04:26 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:08:48.665 15:04:26 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n 
accel 00:08:48.665 15:04:26 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.665 15:04:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:48.665 15:04:26 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.665 15:04:26 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:08:48.665 15:04:26 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.665 15:04:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:48.665 15:04:26 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.665 15:04:26 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:48.665 15:04:26 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.665 15:04:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:48.665 15:04:26 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.665 15:04:26 blockdev_nvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:08:48.665 15:04:26 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:08:48.665 15:04:26 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:08:48.665 15:04:26 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.665 15:04:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:48.665 15:04:26 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.925 15:04:26 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:08:48.925 15:04:26 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:08:48.926 15:04:26 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "4de68c5f-b318-446a-b59a-769ead9b8883"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "4de68c5f-b318-446a-b59a-769ead9b8883",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "771165cb-bc21-435b-ba6f-0e354e15dee5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "771165cb-bc21-435b-ba6f-0e354e15dee5",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "8e4e3504-18f4-4a59-8c5e-2c477b5f84f8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8e4e3504-18f4-4a59-8c5e-2c477b5f84f8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "27bb5bb8-5682-42a2-959f-0c24660c8f95"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "27bb5bb8-5682-42a2-959f-0c24660c8f95",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": 
false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "d1bffada-60a1-47b0-9bd7-bb0fd9446858"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d1bffada-60a1-47b0-9bd7-bb0fd9446858",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "8a23a5f3-d798-4cf7-99c4-c5b1f7de683b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "8a23a5f3-d798-4cf7-99c4-c5b1f7de683b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' 
"firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:48.926 15:04:26 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:08:48.926 15:04:26 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:08:48.926 15:04:26 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:08:48.926 15:04:26 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 66896 00:08:48.926 15:04:26 blockdev_nvme -- common/autotest_common.sh@948 -- # '[' -z 66896 ']' 00:08:48.926 15:04:26 blockdev_nvme -- common/autotest_common.sh@952 -- # kill -0 66896 00:08:48.926 15:04:26 blockdev_nvme -- common/autotest_common.sh@953 -- # uname 00:08:48.926 15:04:26 blockdev_nvme -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:48.926 15:04:26 blockdev_nvme -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66896 00:08:48.926 killing process with pid 66896 00:08:48.926 15:04:26 blockdev_nvme -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:48.926 15:04:26 blockdev_nvme -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:48.926 15:04:26 blockdev_nvme -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66896' 00:08:48.926 15:04:26 blockdev_nvme -- common/autotest_common.sh@967 -- # kill 66896 00:08:48.926 15:04:26 blockdev_nvme -- common/autotest_common.sh@972 -- # wait 66896 00:08:51.475 15:04:29 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:51.475 15:04:29 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:51.475 15:04:29 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:51.475 15:04:29 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:51.475 15:04:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:51.475 ************************************ 00:08:51.475 START TEST bdev_hello_world 00:08:51.475 ************************************ 00:08:51.475 15:04:29 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:51.475 [2024-07-15 15:04:29.459941] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
00:08:51.475 [2024-07-15 15:04:29.460102] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66991 ] 00:08:51.732 [2024-07-15 15:04:29.636728] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.989 [2024-07-15 15:04:29.871839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.554 [2024-07-15 15:04:30.552807] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:52.554 [2024-07-15 15:04:30.552862] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:52.554 [2024-07-15 15:04:30.552882] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:52.554 [2024-07-15 15:04:30.555531] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:52.554 [2024-07-15 15:04:30.556107] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:52.554 [2024-07-15 15:04:30.556137] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:52.554 [2024-07-15 15:04:30.556379] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:08:52.554 00:08:52.554 [2024-07-15 15:04:30.556402] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:53.926 ************************************ 00:08:53.926 END TEST bdev_hello_world 00:08:53.926 ************************************ 00:08:53.926 00:08:53.926 real 0m2.374s 00:08:53.926 user 0m2.016s 00:08:53.926 sys 0m0.252s 00:08:53.926 15:04:31 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:53.926 15:04:31 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:53.926 15:04:31 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:08:53.926 15:04:31 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:08:53.926 15:04:31 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:53.926 15:04:31 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:53.926 15:04:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:53.926 ************************************ 00:08:53.926 START TEST bdev_bounds 00:08:53.926 ************************************ 00:08:53.926 15:04:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:08:53.926 Process bdevio pid: 67039 00:08:53.926 15:04:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=67039 00:08:53.926 15:04:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:53.926 15:04:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:53.926 15:04:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 67039' 00:08:53.926 15:04:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 67039 00:08:53.926 15:04:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 67039 ']' 00:08:53.926 15:04:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.926 15:04:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:53.926 
15:04:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.926 15:04:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:53.926 15:04:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:53.926 [2024-07-15 15:04:31.904430] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:08:53.926 [2024-07-15 15:04:31.904693] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67039 ] 00:08:54.185 [2024-07-15 15:04:32.080380] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:54.451 [2024-07-15 15:04:32.320127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.451 [2024-07-15 15:04:32.320249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.451 [2024-07-15 15:04:32.320291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:55.028 15:04:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:55.028 15:04:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:08:55.028 15:04:33 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:55.286 I/O targets: 00:08:55.286 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:55.286 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:55.286 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:55.286 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:55.286 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:55.286 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:55.286 00:08:55.286 00:08:55.286 CUnit - A unit testing framework for C - Version 2.1-3 00:08:55.286 http://cunit.sourceforge.net/ 00:08:55.286 00:08:55.286 00:08:55.286 Suite: bdevio tests on: Nvme3n1 00:08:55.286 Test: blockdev write read block ...passed 00:08:55.286 Test: blockdev write zeroes read block ...passed 00:08:55.286 Test: blockdev write zeroes read no split ...passed 00:08:55.286 Test: blockdev write zeroes read split ...passed 00:08:55.286 Test: blockdev write zeroes read split partial ...passed 00:08:55.286 Test: blockdev reset ...[2024-07-15 15:04:33.266343] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:08:55.286 [2024-07-15 15:04:33.270416] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:55.286 passed 00:08:55.286 Test: blockdev write read 8 blocks ...passed 00:08:55.286 Test: blockdev write read size > 128k ...passed 00:08:55.286 Test: blockdev write read invalid size ...passed 00:08:55.286 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:55.286 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:55.287 Test: blockdev write read max offset ...passed 00:08:55.287 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:55.287 Test: blockdev writev readv 8 blocks ...passed 00:08:55.287 Test: blockdev writev readv 30 x 1block ...passed 00:08:55.287 Test: blockdev writev readv block ...passed 00:08:55.287 Test: blockdev writev readv size > 128k ...passed 00:08:55.287 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:55.287 Test: blockdev comparev and writev ...[2024-07-15 15:04:33.278395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26720a000 len:0x1000 00:08:55.287 [2024-07-15 15:04:33.278509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:55.287 passed 00:08:55.287 Test: blockdev nvme passthru rw ...passed 00:08:55.287 Test: blockdev nvme passthru vendor specific ...[2024-07-15 15:04:33.279319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:55.287 [2024-07-15 15:04:33.279402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:55.287 passed 00:08:55.287 Test: blockdev nvme admin passthru ...passed 00:08:55.287 Test: blockdev copy ...passed 00:08:55.287 Suite: bdevio tests on: Nvme2n3 00:08:55.287 Test: blockdev write read block ...passed 00:08:55.287 Test: blockdev write zeroes read block ...passed 00:08:55.287 Test: blockdev write zeroes read no split ...passed 00:08:55.287 Test: blockdev write zeroes read split ...passed 00:08:55.287 Test: blockdev write zeroes read split partial ...passed 00:08:55.287 Test: blockdev reset ...[2024-07-15 15:04:33.365773] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:55.287 [2024-07-15 15:04:33.369958] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:55.287 passed 00:08:55.287 Test: blockdev write read 8 blocks ...passed 00:08:55.287 Test: blockdev write read size > 128k ...passed 00:08:55.287 Test: blockdev write read invalid size ...passed 00:08:55.287 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:55.287 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:55.287 Test: blockdev write read max offset ...passed 00:08:55.287 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:55.287 Test: blockdev writev readv 8 blocks ...passed 00:08:55.287 Test: blockdev writev readv 30 x 1block ...passed 00:08:55.287 Test: blockdev writev readv block ...passed 00:08:55.287 Test: blockdev writev readv size > 128k ...passed 00:08:55.287 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:55.287 Test: blockdev comparev and writev ...[2024-07-15 15:04:33.378699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x276a04000 len:0x1000 00:08:55.287 [2024-07-15 15:04:33.378812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:55.287 passed 00:08:55.287 Test: blockdev nvme passthru rw ...passed 00:08:55.287 Test: blockdev nvme passthru vendor specific ...[2024-07-15 15:04:33.379813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:55.287 [2024-07-15 15:04:33.379895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:55.287 passed 00:08:55.287 Test: blockdev nvme admin passthru ...passed 00:08:55.287 Test: blockdev copy ...passed 00:08:55.287 Suite: bdevio tests on: Nvme2n2 00:08:55.287 Test: blockdev write read block ...passed 00:08:55.287 Test: blockdev write zeroes read block ...passed 00:08:55.546 Test: blockdev write zeroes read no split ...passed 00:08:55.546 Test: blockdev write zeroes read split ...passed 00:08:55.546 Test: blockdev write zeroes read split partial ...passed 00:08:55.546 Test: blockdev reset ...[2024-07-15 15:04:33.495219] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:55.547 [2024-07-15 15:04:33.499332] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:55.547 passed 00:08:55.547 Test: blockdev write read 8 blocks ...passed 00:08:55.547 Test: blockdev write read size > 128k ...passed 00:08:55.547 Test: blockdev write read invalid size ...passed 00:08:55.547 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:55.547 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:55.547 Test: blockdev write read max offset ...passed 00:08:55.547 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:55.547 Test: blockdev writev readv 8 blocks ...passed 00:08:55.547 Test: blockdev writev readv 30 x 1block ...passed 00:08:55.547 Test: blockdev writev readv block ...passed 00:08:55.547 Test: blockdev writev readv size > 128k ...passed 00:08:55.547 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:55.547 Test: blockdev comparev and writev ...[2024-07-15 15:04:33.506502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x272c3a000 len:0x1000 00:08:55.547 [2024-07-15 15:04:33.506614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:55.547 passed 00:08:55.547 Test: blockdev nvme passthru rw ...passed 00:08:55.547 Test: blockdev nvme passthru vendor specific ...[2024-07-15 15:04:33.507453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:55.547 [2024-07-15 15:04:33.507535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:55.547 passed 00:08:55.547 Test: blockdev nvme admin passthru ...passed 00:08:55.547 Test: blockdev copy ...passed 00:08:55.547 Suite: bdevio tests on: Nvme2n1 00:08:55.547 Test: blockdev write read block ...passed 00:08:55.547 Test: blockdev write zeroes read block ...passed 00:08:55.547 Test: blockdev write zeroes read no split ...passed 00:08:55.547 Test: blockdev write zeroes read split ...passed 00:08:55.547 Test: blockdev write zeroes read split partial ...passed 00:08:55.547 Test: blockdev reset ...[2024-07-15 15:04:33.595229] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:55.547 [2024-07-15 15:04:33.599570] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:55.547 passed 00:08:55.547 Test: blockdev write read 8 blocks ...passed 00:08:55.547 Test: blockdev write read size > 128k ...passed 00:08:55.547 Test: blockdev write read invalid size ...passed 00:08:55.547 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:55.547 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:55.547 Test: blockdev write read max offset ...passed 00:08:55.547 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:55.547 Test: blockdev writev readv 8 blocks ...passed 00:08:55.547 Test: blockdev writev readv 30 x 1block ...passed 00:08:55.547 Test: blockdev writev readv block ...passed 00:08:55.547 Test: blockdev writev readv size > 128k ...passed 00:08:55.547 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:55.547 Test: blockdev comparev and writev ...[2024-07-15 15:04:33.607302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x272c34000 len:0x1000 00:08:55.547 [2024-07-15 15:04:33.607402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:55.547 passed 00:08:55.547 Test: blockdev nvme passthru rw ...passed 00:08:55.547 Test: blockdev nvme passthru vendor specific ...[2024-07-15 15:04:33.608287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:55.547 [2024-07-15 15:04:33.608371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:55.547 passed 00:08:55.547 Test: blockdev nvme admin passthru ...passed 00:08:55.547 Test: blockdev copy ...passed 00:08:55.547 Suite: bdevio tests on: Nvme1n1 00:08:55.547 Test: blockdev write read block ...passed 00:08:55.547 Test: blockdev write zeroes read block ...passed 00:08:55.547 Test: blockdev write zeroes read no split ...passed 00:08:55.805 Test: blockdev write zeroes read split ...passed 00:08:55.805 Test: blockdev write zeroes read split partial ...passed 00:08:55.805 Test: blockdev reset ...[2024-07-15 15:04:33.690903] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:08:55.805 [2024-07-15 15:04:33.694493] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:55.805 passed 00:08:55.805 Test: blockdev write read 8 blocks ...passed 00:08:55.805 Test: blockdev write read size > 128k ...passed 00:08:55.805 Test: blockdev write read invalid size ...passed 00:08:55.805 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:55.805 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:55.805 Test: blockdev write read max offset ...passed 00:08:55.805 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:55.805 Test: blockdev writev readv 8 blocks ...passed 00:08:55.805 Test: blockdev writev readv 30 x 1block ...passed 00:08:55.805 Test: blockdev writev readv block ...passed 00:08:55.805 Test: blockdev writev readv size > 128k ...passed 00:08:55.805 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:55.805 Test: blockdev comparev and writev ...[2024-07-15 15:04:33.702948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x272c30000 len:0x1000 00:08:55.805 [2024-07-15 15:04:33.703072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:55.805 passed 00:08:55.805 Test: blockdev nvme passthru rw ...passed 00:08:55.805 Test: blockdev nvme passthru vendor specific ...[2024-07-15 15:04:33.704107] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:55.805 [2024-07-15 15:04:33.704190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:55.805 passed 00:08:55.805 Test: blockdev nvme admin passthru ...passed 00:08:55.805 Test: blockdev copy ...passed 00:08:55.805 Suite: bdevio tests on: Nvme0n1 00:08:55.805 Test: blockdev write read block ...passed 00:08:55.805 Test: blockdev write zeroes read block ...passed 00:08:55.805 Test: blockdev write zeroes read no split ...passed 00:08:55.805 Test: blockdev write zeroes read split ...passed 00:08:55.805 Test: blockdev write zeroes read split partial ...passed 00:08:55.805 Test: blockdev reset ...[2024-07-15 15:04:33.797422] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:08:55.805 [2024-07-15 15:04:33.801362] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:55.805 passed 00:08:55.805 Test: blockdev write read 8 blocks ...passed 00:08:55.805 Test: blockdev write read size > 128k ...passed 00:08:55.805 Test: blockdev write read invalid size ...passed 00:08:55.805 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:55.805 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:55.805 Test: blockdev write read max offset ...passed 00:08:55.805 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:55.805 Test: blockdev writev readv 8 blocks ...passed 00:08:55.805 Test: blockdev writev readv 30 x 1block ...passed 00:08:55.805 Test: blockdev writev readv block ...passed 00:08:55.805 Test: blockdev writev readv size > 128k ...passed 00:08:55.805 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:55.805 Test: blockdev comparev and writev ...[2024-07-15 15:04:33.809001] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:55.805 separate metadata which is not supported yet. passed
00:08:55.805 00:08:55.805 Test: blockdev nvme passthru rw ...passed 00:08:55.805 Test: blockdev nvme passthru vendor specific ...[2024-07-15 15:04:33.809526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:55.805 [2024-07-15 15:04:33.809651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:55.805 passed 00:08:55.805 Test: blockdev nvme admin passthru ...passed 00:08:55.805 Test: blockdev copy ...passed 00:08:55.805 00:08:55.805 Run Summary: Type Total Ran Passed Failed Inactive 00:08:55.805 suites 6 6 n/a 0 0 00:08:55.805 tests 138 138 138 0 0 00:08:55.805 asserts 893 893 893 0 n/a 00:08:55.805 00:08:55.805 Elapsed time = 1.728 seconds 00:08:55.805 0 00:08:55.805 15:04:33 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 67039 00:08:55.805 15:04:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 67039 ']' 00:08:55.805 15:04:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 67039 00:08:55.805 15:04:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:08:55.805 15:04:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:55.805 15:04:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67039 00:08:55.805 15:04:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:55.805 15:04:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:55.805 15:04:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67039' 00:08:55.805 killing process with pid 67039 00:08:55.805 15:04:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@967 -- # kill 67039 00:08:55.805 15:04:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # wait 67039 00:08:57.183 15:04:35 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:08:57.183 00:08:57.183 real 0m3.207s 00:08:57.183 user 0m7.903s 00:08:57.183 sys 0m0.389s 00:08:57.183 15:04:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:57.183 15:04:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:57.183 ************************************ 00:08:57.183 END TEST bdev_bounds 00:08:57.183 ************************************ 00:08:57.183 15:04:35 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:08:57.183 15:04:35 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:57.183 15:04:35 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:08:57.183 15:04:35 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:57.183 15:04:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:57.183 ************************************ 00:08:57.183 START TEST bdev_nbd 00:08:57.183 ************************************ 00:08:57.183 15:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:57.183 15:04:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:08:57.183 15:04:35 blockdev_nvme.bdev_nbd -- 
bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:08:57.183 15:04:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:57.183 15:04:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:57.183 15:04:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:57.183 15:04:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:08:57.183 15:04:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:08:57.183 15:04:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:08:57.183 15:04:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:57.183 15:04:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:08:57.183 15:04:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:08:57.183 15:04:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:57.183 15:04:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:08:57.183 15:04:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:57.183 15:04:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:08:57.183 15:04:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=67104 00:08:57.183 15:04:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:57.183 15:04:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:57.183 15:04:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 67104 /var/tmp/spdk-nbd.sock 00:08:57.183 15:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 67104 ']' 00:08:57.183 15:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:57.183 15:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:57.183 15:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:57.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:57.183 15:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:57.183 15:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:57.183 [2024-07-15 15:04:35.187469] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
00:08:57.183 [2024-07-15 15:04:35.187636] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.441 [2024-07-15 15:04:35.337578] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.698 [2024-07-15 15:04:35.574001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.266 15:04:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:58.266 15:04:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:08:58.266 15:04:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:58.266 15:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:58.266 15:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:58.266 15:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:58.266 15:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:58.266 15:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:58.266 15:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:58.266 15:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:58.266 15:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:58.266 15:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:58.266 15:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:58.266 15:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:58.266 15:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:58.524 15:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:58.525 15:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:58.525 15:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:58.525 15:04:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:58.525 15:04:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:58.525 15:04:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:58.525 15:04:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:58.525 15:04:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:58.525 15:04:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:58.525 15:04:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:58.525 15:04:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:58.525 15:04:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:58.525 1+0 records in 
00:08:58.525 1+0 records out 00:08:58.525 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000730316 s, 5.6 MB/s 00:08:58.525 15:04:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:58.525 15:04:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:58.525 15:04:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:58.525 15:04:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:58.525 15:04:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:58.525 15:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:58.525 15:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:58.525 15:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:58.784 15:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:58.784 15:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:58.784 15:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:58.784 15:04:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:58.784 15:04:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:58.784 15:04:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:58.784 15:04:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:58.784 15:04:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:58.784 15:04:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:58.784 15:04:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:58.784 15:04:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:58.784 15:04:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:58.784 1+0 records in 00:08:58.784 1+0 records out 00:08:58.784 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000570789 s, 7.2 MB/s 00:08:58.784 15:04:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:58.784 15:04:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:58.784 15:04:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:58.784 15:04:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:58.784 15:04:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:58.784 15:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:58.784 15:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:58.784 15:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:59.042 15:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:59.042 15:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:59.042 15:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:08:59.042 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:08:59.042 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:59.042 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:59.042 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:59.042 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:08:59.042 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:59.042 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:59.042 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:59.042 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:59.042 1+0 records in 00:08:59.042 1+0 records out 00:08:59.042 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000575171 s, 7.1 MB/s 00:08:59.042 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:59.042 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:59.042 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:59.042 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:59.042 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:59.042 15:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:59.042 15:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:59.042 15:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:59.301 15:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:59.301 15:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:59.301 15:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:59.301 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:08:59.301 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:59.301 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:59.301 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:59.301 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:08:59.301 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:59.301 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:59.301 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:59.301 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:59.301 1+0 records in 00:08:59.301 1+0 records out 00:08:59.301 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000744497 s, 5.5 MB/s 00:08:59.301 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:59.301 15:04:37 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:59.301 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:59.301 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:59.301 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:59.301 15:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:59.301 15:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:59.301 15:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:59.560 15:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:59.560 15:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:59.560 15:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:59.560 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:08:59.560 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:59.560 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:59.560 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:59.560 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:08:59.560 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:59.560 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:59.560 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:59.560 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:59.560 1+0 records in 00:08:59.560 1+0 records out 00:08:59.560 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000476265 s, 8.6 MB/s 00:08:59.560 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:59.560 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:59.560 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:59.560 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:59.560 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:59.560 15:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:59.560 15:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:59.560 15:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:59.819 15:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:59.819 15:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:59.819 15:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:59.819 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:08:59.819 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:59.819 15:04:37 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:59.819 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:59.819 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:08:59.819 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:59.819 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:59.819 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:59.819 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:59.819 1+0 records in 00:08:59.819 1+0 records out 00:08:59.819 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000488477 s, 8.4 MB/s 00:08:59.819 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:59.819 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:59.819 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:59.819 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:59.819 15:04:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:59.819 15:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:59.819 15:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:59.819 15:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:59.819 15:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:59.819 { 00:08:59.819 "nbd_device": "/dev/nbd0", 00:08:59.819 "bdev_name": "Nvme0n1" 00:08:59.819 }, 00:08:59.819 { 00:08:59.819 "nbd_device": "/dev/nbd1", 00:08:59.819 "bdev_name": "Nvme1n1" 00:08:59.819 }, 00:08:59.819 { 00:08:59.819 "nbd_device": "/dev/nbd2", 00:08:59.819 "bdev_name": "Nvme2n1" 00:08:59.819 }, 00:08:59.819 { 00:08:59.819 "nbd_device": "/dev/nbd3", 00:08:59.819 "bdev_name": "Nvme2n2" 00:08:59.819 }, 00:08:59.819 { 00:08:59.819 "nbd_device": "/dev/nbd4", 00:08:59.819 "bdev_name": "Nvme2n3" 00:08:59.819 }, 00:08:59.819 { 00:08:59.819 "nbd_device": "/dev/nbd5", 00:08:59.819 "bdev_name": "Nvme3n1" 00:08:59.819 } 00:08:59.819 ]' 00:08:59.819 15:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:59.819 15:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:59.819 { 00:08:59.819 "nbd_device": "/dev/nbd0", 00:08:59.819 "bdev_name": "Nvme0n1" 00:08:59.819 }, 00:08:59.819 { 00:08:59.819 "nbd_device": "/dev/nbd1", 00:08:59.819 "bdev_name": "Nvme1n1" 00:08:59.819 }, 00:08:59.819 { 00:08:59.819 "nbd_device": "/dev/nbd2", 00:08:59.819 "bdev_name": "Nvme2n1" 00:08:59.819 }, 00:08:59.819 { 00:08:59.819 "nbd_device": "/dev/nbd3", 00:08:59.819 "bdev_name": "Nvme2n2" 00:08:59.819 }, 00:08:59.819 { 00:08:59.819 "nbd_device": "/dev/nbd4", 00:08:59.819 "bdev_name": "Nvme2n3" 00:08:59.819 }, 00:08:59.819 { 00:08:59.819 "nbd_device": "/dev/nbd5", 00:08:59.819 "bdev_name": "Nvme3n1" 00:08:59.819 } 00:08:59.819 ]' 00:09:00.079 15:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:00.079 15:04:37 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:09:00.079 15:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:00.079 15:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:09:00.079 15:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:00.079 15:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:00.079 15:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:00.079 15:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:00.079 15:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:00.079 15:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:00.079 15:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:00.079 15:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:00.079 15:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:00.079 15:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:00.079 15:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:00.079 15:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:00.079 15:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:00.079 15:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:00.336 15:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:00.337 15:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:00.337 15:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:00.337 15:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:00.337 15:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:00.337 15:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:00.337 15:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:00.337 15:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:00.337 15:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:00.337 15:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:00.616 15:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:00.616 15:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:00.616 15:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:00.616 15:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:00.616 15:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:00.617 15:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:00.617 15:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:00.617 15:04:38 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:09:00.617 15:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:00.617 15:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:00.875 15:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:00.875 15:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:00.875 15:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:00.875 15:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:00.875 15:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:00.875 15:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:00.875 15:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:00.875 15:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:00.875 15:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:00.875 15:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:01.133 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:01.133 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:01.133 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:01.133 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:01.133 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:01.133 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:01.133 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:01.133 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:01.133 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:01.133 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:01.133 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:01.133 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:01.133 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:01.133 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:01.133 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:01.133 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:01.133 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:01.133 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:01.133 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:01.133 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:01.133 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:01.392 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:01.392 15:04:39 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:01.392 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:01.392 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:01.392 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:01.392 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:01.392 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:01.392 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:01.392 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:01.392 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:09:01.392 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:01.392 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:09:01.392 15:04:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:01.392 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:01.392 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:01.392 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:01.392 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:01.392 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:01.392 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:01.392 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:01.392 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:01.392 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:01.392 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:01.392 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:01.392 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:09:01.392 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:01.392 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:01.392 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:09:01.652 /dev/nbd0 00:09:01.652 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:01.652 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:01.652 15:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:01.652 15:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:01.652 15:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:01.652 
15:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:01.652 15:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:01.652 15:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:01.652 15:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:01.652 15:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:01.652 15:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:01.652 1+0 records in 00:09:01.652 1+0 records out 00:09:01.652 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000782093 s, 5.2 MB/s 00:09:01.652 15:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.652 15:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:01.652 15:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.652 15:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:01.652 15:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:01.652 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:01.652 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:01.652 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:09:01.912 /dev/nbd1 00:09:01.912 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:01.912 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:01.912 15:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:01.912 15:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:01.912 15:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:01.912 15:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:01.912 15:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:01.912 15:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:01.912 15:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:01.912 15:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:01.912 15:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:01.912 1+0 records in 00:09:01.912 1+0 records out 00:09:01.912 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000739794 s, 5.5 MB/s 00:09:01.912 15:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.912 15:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:01.912 15:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.912 15:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:01.912 15:04:39 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@887 -- # return 0 00:09:01.912 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:01.912 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:01.912 15:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:09:02.172 /dev/nbd10 00:09:02.172 15:04:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:02.172 15:04:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:02.172 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:09:02.172 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:02.172 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:02.172 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:02.172 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:09:02.172 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:02.172 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:02.172 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:02.172 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:02.172 1+0 records in 00:09:02.172 1+0 records out 00:09:02.172 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000531675 s, 7.7 MB/s 00:09:02.172 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:02.172 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:02.172 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:02.172 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:02.172 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:02.172 15:04:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:02.172 15:04:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:02.172 15:04:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:09:02.431 /dev/nbd11 00:09:02.431 15:04:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:02.431 15:04:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:02.431 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:09:02.431 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:02.431 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:02.431 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:02.431 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:09:02.431 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:02.431 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:02.431 15:04:40 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:02.431 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:02.431 1+0 records in 00:09:02.431 1+0 records out 00:09:02.431 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000538285 s, 7.6 MB/s 00:09:02.431 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:02.431 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:02.431 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:02.431 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:02.431 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:02.431 15:04:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:02.431 15:04:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:02.431 15:04:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:09:02.691 /dev/nbd12 00:09:02.691 15:04:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:02.691 15:04:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:02.691 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:09:02.691 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:02.691 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:02.691 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:02.691 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:09:02.691 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:02.691 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:02.691 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:02.691 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:02.691 1+0 records in 00:09:02.691 1+0 records out 00:09:02.691 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000679678 s, 6.0 MB/s 00:09:02.691 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:02.691 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:02.691 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:02.691 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:02.691 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:02.691 15:04:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:02.691 15:04:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:02.691 15:04:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:09:02.691 /dev/nbd13 
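Each pass of the nbd_start_disks loop above exports one bdev over NBD through the RPC socket and then confirms the kernel node actually works before moving to the next device: wait for the name to show up in /proc/partitions, read one 4 KiB block with O_DIRECT, and check the copy is non-empty. A condensed sketch of that sequence, assuming the target is already serving /var/tmp/spdk-nbd.sock; the scratch file path and the 0.1 s retry delay are illustrative (the trace only shows that up to 20 retries are allowed).

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
for i in $(seq 1 20); do                        # wait for the kernel device node to appear
    grep -q -w nbd0 /proc/partitions && break
    sleep 0.1                                   # delay is an assumption, not from the trace
done
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # read one block, bypassing the page cache
[ "$(stat -c %s /tmp/nbdtest)" != 0 ] && rm -f /tmp/nbdtest    # a non-empty copy means the export is readable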
00:09:02.950 15:04:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:02.950 15:04:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:02.950 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:09:02.950 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:02.950 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:02.950 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:02.950 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:09:02.950 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:02.950 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:02.950 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:02.950 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:02.950 1+0 records in 00:09:02.950 1+0 records out 00:09:02.950 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000712757 s, 5.7 MB/s 00:09:02.950 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:02.950 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:02.950 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:02.950 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:02.950 15:04:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:02.950 15:04:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:02.950 15:04:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:02.950 15:04:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:02.950 15:04:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:02.950 15:04:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:02.950 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:02.950 { 00:09:02.950 "nbd_device": "/dev/nbd0", 00:09:02.950 "bdev_name": "Nvme0n1" 00:09:02.950 }, 00:09:02.950 { 00:09:02.950 "nbd_device": "/dev/nbd1", 00:09:02.950 "bdev_name": "Nvme1n1" 00:09:02.950 }, 00:09:02.950 { 00:09:02.950 "nbd_device": "/dev/nbd10", 00:09:02.950 "bdev_name": "Nvme2n1" 00:09:02.950 }, 00:09:02.950 { 00:09:02.950 "nbd_device": "/dev/nbd11", 00:09:02.950 "bdev_name": "Nvme2n2" 00:09:02.950 }, 00:09:02.950 { 00:09:02.950 "nbd_device": "/dev/nbd12", 00:09:02.950 "bdev_name": "Nvme2n3" 00:09:02.950 }, 00:09:02.950 { 00:09:02.950 "nbd_device": "/dev/nbd13", 00:09:02.950 "bdev_name": "Nvme3n1" 00:09:02.950 } 00:09:02.950 ]' 00:09:02.950 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:02.950 { 00:09:02.950 "nbd_device": "/dev/nbd0", 00:09:02.950 "bdev_name": "Nvme0n1" 00:09:02.950 }, 00:09:02.950 { 00:09:02.950 "nbd_device": "/dev/nbd1", 00:09:02.950 "bdev_name": "Nvme1n1" 00:09:02.950 }, 00:09:02.950 { 00:09:02.950 "nbd_device": "/dev/nbd10", 00:09:02.950 "bdev_name": "Nvme2n1" 
00:09:02.950 }, 00:09:02.950 { 00:09:02.950 "nbd_device": "/dev/nbd11", 00:09:02.950 "bdev_name": "Nvme2n2" 00:09:02.950 }, 00:09:02.950 { 00:09:02.950 "nbd_device": "/dev/nbd12", 00:09:02.950 "bdev_name": "Nvme2n3" 00:09:02.950 }, 00:09:02.950 { 00:09:02.950 "nbd_device": "/dev/nbd13", 00:09:02.950 "bdev_name": "Nvme3n1" 00:09:02.950 } 00:09:02.950 ]' 00:09:02.950 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:03.210 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:03.210 /dev/nbd1 00:09:03.210 /dev/nbd10 00:09:03.210 /dev/nbd11 00:09:03.210 /dev/nbd12 00:09:03.210 /dev/nbd13' 00:09:03.210 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:03.210 /dev/nbd1 00:09:03.210 /dev/nbd10 00:09:03.210 /dev/nbd11 00:09:03.210 /dev/nbd12 00:09:03.210 /dev/nbd13' 00:09:03.211 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:03.211 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:09:03.211 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:09:03.211 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:09:03.211 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:09:03.211 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:09:03.211 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:03.211 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:03.211 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:03.211 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:03.211 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:03.211 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:03.211 256+0 records in 00:09:03.211 256+0 records out 00:09:03.211 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00519178 s, 202 MB/s 00:09:03.211 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:03.211 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:03.211 256+0 records in 00:09:03.211 256+0 records out 00:09:03.211 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0870613 s, 12.0 MB/s 00:09:03.211 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:03.211 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:03.211 256+0 records in 00:09:03.211 256+0 records out 00:09:03.211 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0994857 s, 10.5 MB/s 00:09:03.211 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:03.211 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:03.470 256+0 records in 00:09:03.470 256+0 records out 
00:09:03.470 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0973897 s, 10.8 MB/s 00:09:03.470 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:03.470 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:03.470 256+0 records in 00:09:03.470 256+0 records out 00:09:03.470 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0965299 s, 10.9 MB/s 00:09:03.470 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:03.470 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:03.730 256+0 records in 00:09:03.730 256+0 records out 00:09:03.730 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0939378 s, 11.2 MB/s 00:09:03.730 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:03.730 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:03.730 256+0 records in 00:09:03.730 256+0 records out 00:09:03.730 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0954834 s, 11.0 MB/s 00:09:03.730 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:09:03.730 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:03.730 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:03.730 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:03.730 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:03.730 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:03.730 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:03.730 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:03.730 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:03.730 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:03.730 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:03.730 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:03.730 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:03.730 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:03.730 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:03.730 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:03.730 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:03.730 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:03.730 15:04:41 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:03.730 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:03.730 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:03.730 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:03.730 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:03.730 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:03.730 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:03.730 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:03.730 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:03.989 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:03.989 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:03.989 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:03.989 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:03.989 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:03.989 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:03.989 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:03.989 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:03.989 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:03.989 15:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:04.249 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:04.249 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:04.249 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:04.249 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:04.249 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:04.249 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:04.249 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:04.249 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:04.249 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:04.249 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:04.249 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:04.249 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:04.249 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:04.249 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:04.249 
15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:04.249 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:04.249 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:04.249 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:04.249 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:04.249 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:04.508 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:04.508 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:04.508 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:04.508 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:04.508 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:04.508 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:04.508 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:04.508 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:04.508 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:04.508 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:04.767 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:09:04.767 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:04.767 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:04.767 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:04.767 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:04.767 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:04.767 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:04.767 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:04.767 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:04.767 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:05.025 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:05.025 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:05.025 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:05.025 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:05.025 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:05.025 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:05.025 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:05.025 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:05.025 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:05.025 15:04:42 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:05.025 15:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:05.283 15:04:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:05.283 15:04:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:05.283 15:04:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:05.283 15:04:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:05.283 15:04:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:05.283 15:04:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:05.283 15:04:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:05.283 15:04:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:05.283 15:04:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:05.283 15:04:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:09:05.283 15:04:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:05.283 15:04:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:09:05.283 15:04:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:05.283 15:04:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:05.283 15:04:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:05.283 15:04:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:09:05.283 15:04:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:09:05.283 15:04:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:05.283 malloc_lvol_verify 00:09:05.541 15:04:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:05.541 f9909db6-1bf0-4212-adab-1ab7b72bc3f5 00:09:05.541 15:04:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:05.800 9e2c4b76-d4fd-407c-bea7-0d44bf03c959 00:09:05.800 15:04:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:06.058 /dev/nbd0 00:09:06.058 15:04:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:09:06.058 mke2fs 1.46.5 (30-Dec-2021) 00:09:06.058 Discarding device blocks: 0/4096 done 00:09:06.058 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:06.058 00:09:06.058 Allocating group tables: 0/1 done 00:09:06.059 Writing inode tables: 0/1 done 00:09:06.059 Creating journal (1024 blocks): done 00:09:06.059 Writing superblocks and filesystem accounting information: 0/1 done 00:09:06.059 00:09:06.059 15:04:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:09:06.059 15:04:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:06.059 
15:04:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:06.059 15:04:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:06.059 15:04:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:06.059 15:04:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:06.059 15:04:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:06.059 15:04:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:06.355 15:04:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:06.355 15:04:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:06.355 15:04:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:06.355 15:04:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:06.355 15:04:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:06.355 15:04:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:06.355 15:04:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:06.355 15:04:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:06.355 15:04:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:09:06.355 15:04:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:09:06.355 15:04:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 67104 00:09:06.355 15:04:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 67104 ']' 00:09:06.355 15:04:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 67104 00:09:06.355 15:04:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:09:06.355 15:04:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:06.355 15:04:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67104 00:09:06.355 15:04:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:06.355 15:04:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:06.355 15:04:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67104' 00:09:06.355 killing process with pid 67104 00:09:06.355 15:04:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@967 -- # kill 67104 00:09:06.355 15:04:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # wait 67104 00:09:07.756 15:04:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:09:07.756 00:09:07.756 real 0m10.412s 00:09:07.756 user 0m14.094s 00:09:07.756 sys 0m3.547s 00:09:07.756 15:04:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:07.756 15:04:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:07.756 ************************************ 00:09:07.756 END TEST bdev_nbd 00:09:07.756 ************************************ 00:09:07.756 15:04:45 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:07.756 15:04:45 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:09:07.756 15:04:45 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:09:07.756 skipping fio tests on NVMe due to multi-ns failures. 
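Before the NBD helpers were torn down above, the suite also proved the path end to end with nbd_with_lvol_verify: build a logical volume on a malloc bdev, export it as /dev/nbd0, and put an ext4 filesystem on it. A compact sketch of that flow, under the same socket assumption; the sizes follow the trace (16 MB malloc bdev with 512-byte blocks, 4 MB logical volume).

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # RAM-backed bdev to host the lvstore
$rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # logical volume store on top of it
$rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MB volume inside the store
$rpc nbd_start_disk lvs/lvol /dev/nbd0                 # expose the lvol as a kernel block device
mkfs.ext4 /dev/nbd0                                    # a successful mkfs is the pass criterion
$rpc nbd_stop_disk /dev/nbd0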
00:09:07.756 15:04:45 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:09:07.756 15:04:45 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:07.756 15:04:45 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:07.756 15:04:45 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:09:07.756 15:04:45 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:07.756 15:04:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:07.757 ************************************ 00:09:07.757 START TEST bdev_verify 00:09:07.757 ************************************ 00:09:07.757 15:04:45 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:07.757 [2024-07-15 15:04:45.650254] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:09:07.757 [2024-07-15 15:04:45.650380] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67485 ] 00:09:07.757 [2024-07-15 15:04:45.813861] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:08.016 [2024-07-15 15:04:46.047072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.016 [2024-07-15 15:04:46.047095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.955 Running I/O for 5 seconds... 
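With the fio path skipped, bdev_verify drives every NVMe bdev with the bdevperf example application instead. The invocation recorded above, restated with the knobs spelled out; the flag meanings reflect bdevperf's usual semantics, and -C is simply passed through as it appears in the trace.

# -q 128    : 128 outstanding I/Os per job
# -o 4096   : 4 KiB I/O size
# -w verify : write-then-read-back pattern with data checking
# -t 5      : run for 5 seconds
# -m 0x3    : reactor core mask (two cores), used together with -C as in the trace
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3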
00:09:14.230 00:09:14.230 Latency(us) 00:09:14.230 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.230 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:14.230 Verification LBA range: start 0x0 length 0xbd0bd 00:09:14.230 Nvme0n1 : 5.05 1709.23 6.68 0.00 0.00 74515.80 10703.26 79215.57 00:09:14.230 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:14.230 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:09:14.230 Nvme0n1 : 5.06 1721.55 6.72 0.00 0.00 73995.21 15911.80 75552.42 00:09:14.230 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:14.230 Verification LBA range: start 0x0 length 0xa0000 00:09:14.230 Nvme1n1 : 5.07 1717.27 6.71 0.00 0.00 74235.21 10359.84 72347.17 00:09:14.230 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:14.230 Verification LBA range: start 0xa0000 length 0xa0000 00:09:14.230 Nvme1n1 : 5.07 1729.27 6.75 0.00 0.00 73664.67 5809.52 67768.23 00:09:14.230 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:14.230 Verification LBA range: start 0x0 length 0x80000 00:09:14.230 Nvme2n1 : 5.07 1716.72 6.71 0.00 0.00 74043.51 10817.73 71431.38 00:09:14.230 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:14.230 Verification LBA range: start 0x80000 length 0x80000 00:09:14.230 Nvme2n1 : 5.07 1728.42 6.75 0.00 0.00 73568.24 7068.73 68226.12 00:09:14.230 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:14.230 Verification LBA range: start 0x0 length 0x80000 00:09:14.230 Nvme2n2 : 5.07 1715.89 6.70 0.00 0.00 73914.28 12076.94 70973.48 00:09:14.230 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:14.230 Verification LBA range: start 0x80000 length 0x80000 00:09:14.230 Nvme2n2 : 5.07 1727.70 6.75 0.00 0.00 73459.12 7841.43 68684.02 00:09:14.230 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:14.230 Verification LBA range: start 0x0 length 0x80000 00:09:14.230 Nvme2n3 : 5.07 1715.18 6.70 0.00 0.00 73772.37 12706.54 70515.59 00:09:14.230 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:14.230 Verification LBA range: start 0x80000 length 0x80000 00:09:14.230 Nvme2n3 : 5.08 1727.04 6.75 0.00 0.00 73320.95 8585.50 70057.70 00:09:14.230 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:14.230 Verification LBA range: start 0x0 length 0x20000 00:09:14.230 Nvme3n1 : 5.08 1714.53 6.70 0.00 0.00 73664.65 11905.23 72347.17 00:09:14.230 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:14.230 Verification LBA range: start 0x20000 length 0x20000 00:09:14.230 Nvme3n1 : 5.08 1726.39 6.74 0.00 0.00 73184.54 9329.58 71889.27 00:09:14.230 =================================================================================================================== 00:09:14.230 Total : 20649.20 80.66 0.00 0.00 73776.72 5809.52 79215.57 00:09:15.609 00:09:15.609 real 0m8.121s 00:09:15.609 user 0m14.840s 00:09:15.609 sys 0m0.268s 00:09:15.609 15:04:53 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:15.609 15:04:53 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:09:15.609 ************************************ 00:09:15.609 END TEST bdev_verify 00:09:15.609 ************************************ 00:09:15.868 15:04:53 blockdev_nvme -- 
common/autotest_common.sh@1142 -- # return 0 00:09:15.868 15:04:53 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:15.868 15:04:53 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:09:15.868 15:04:53 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:15.868 15:04:53 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:15.868 ************************************ 00:09:15.868 START TEST bdev_verify_big_io 00:09:15.868 ************************************ 00:09:15.868 15:04:53 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:15.868 [2024-07-15 15:04:53.831832] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:09:15.868 [2024-07-15 15:04:53.831950] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67594 ] 00:09:16.127 [2024-07-15 15:04:53.995732] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:16.128 [2024-07-15 15:04:54.225195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.128 [2024-07-15 15:04:54.225231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.065 Running I/O for 5 seconds... 00:09:23.628 00:09:23.628 Latency(us) 00:09:23.628 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:23.628 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:23.628 Verification LBA range: start 0x0 length 0xbd0b 00:09:23.628 Nvme0n1 : 5.46 211.15 13.20 0.00 0.00 591798.22 26672.29 655703.42 00:09:23.628 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:23.628 Verification LBA range: start 0xbd0b length 0xbd0b 00:09:23.628 Nvme0n1 : 5.67 111.78 6.99 0.00 0.00 1125570.71 25985.45 1252796.48 00:09:23.628 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:23.628 Verification LBA range: start 0x0 length 0xa000 00:09:23.628 Nvme1n1 : 5.46 210.88 13.18 0.00 0.00 576736.48 67768.23 534819.55 00:09:23.628 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:23.628 Verification LBA range: start 0xa000 length 0xa000 00:09:23.628 Nvme1n1 : 5.67 109.80 6.86 0.00 0.00 1110777.21 25870.98 1106270.57 00:09:23.628 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:23.628 Verification LBA range: start 0x0 length 0x8000 00:09:23.628 Nvme2n1 : 5.53 211.40 13.21 0.00 0.00 562422.11 65478.76 505514.37 00:09:23.628 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:23.628 Verification LBA range: start 0x8000 length 0x8000 00:09:23.628 Nvme2n1 : 5.68 109.24 6.83 0.00 0.00 1057191.41 25527.56 1084291.69 00:09:23.628 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:23.628 Verification LBA range: start 0x0 length 0x8000 00:09:23.628 Nvme2n2 : 5.56 216.23 13.51 0.00 0.00 545366.76 20605.21 556798.43 00:09:23.628 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 
65536) 00:09:23.628 Verification LBA range: start 0x8000 length 0x8000 00:09:23.628 Nvme2n2 : 5.81 129.36 8.09 0.00 0.00 864881.62 29534.13 1098944.28 00:09:23.628 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:23.628 Verification LBA range: start 0x0 length 0x8000 00:09:23.628 Nvme2n3 : 5.57 215.46 13.47 0.00 0.00 539739.19 7440.77 1076965.39 00:09:23.628 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:23.628 Verification LBA range: start 0x8000 length 0x8000 00:09:23.628 Nvme2n3 : 5.87 147.78 9.24 0.00 0.00 730249.11 10016.42 1245470.18 00:09:23.628 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:23.628 Verification LBA range: start 0x0 length 0x2000 00:09:23.628 Nvme3n1 : 5.58 229.82 14.36 0.00 0.00 496806.27 5695.05 600756.21 00:09:23.628 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:23.628 Verification LBA range: start 0x2000 length 0x2000 00:09:23.628 Nvme3n1 : 6.02 233.73 14.61 0.00 0.00 445325.39 615.29 1186859.82 00:09:23.628 =================================================================================================================== 00:09:23.628 Total : 2136.63 133.54 0.00 0.00 656229.86 615.29 1252796.48 00:09:25.531 00:09:25.531 real 0m9.645s 00:09:25.531 user 0m17.844s 00:09:25.531 sys 0m0.288s 00:09:25.531 15:05:03 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:25.531 15:05:03 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:09:25.531 ************************************ 00:09:25.531 END TEST bdev_verify_big_io 00:09:25.531 ************************************ 00:09:25.531 15:05:03 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:25.531 15:05:03 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:25.531 15:05:03 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:09:25.531 15:05:03 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:25.531 15:05:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:25.531 ************************************ 00:09:25.531 START TEST bdev_write_zeroes 00:09:25.531 ************************************ 00:09:25.531 15:05:03 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:25.531 [2024-07-15 15:05:03.546113] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:09:25.531 [2024-07-15 15:05:03.546235] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67709 ] 00:09:25.790 [2024-07-15 15:05:03.709978] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.049 [2024-07-15 15:05:03.942301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.633 Running I/O for 1 seconds... 
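The three bdevperf phases in this section reuse the same binary and config and only swap the workload knobs: 4 KiB verify for 5 seconds on two cores, 64 KiB verify for the big-I/O pass, and a 1 second write_zeroes pass on a single core. A sketch that lines them up, with the argument sets copied from the trace.

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
json=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
$bdevperf --json "$json" -q 128 -o 4096  -w verify       -t 5 -C -m 0x3   # bdev_verify
$bdevperf --json "$json" -q 128 -o 65536 -w verify       -t 5 -C -m 0x3   # bdev_verify_big_io
$bdevperf --json "$json" -q 128 -o 4096  -w write_zeroes -t 1             # bdev_write_zeroes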
00:09:28.038 00:09:28.038 Latency(us) 00:09:28.038 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:28.038 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:28.038 Nvme0n1 : 1.01 11044.64 43.14 0.00 0.00 11557.57 8928.92 31136.75 00:09:28.038 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:28.038 Nvme1n1 : 1.02 11031.90 43.09 0.00 0.00 11554.55 9329.58 30907.81 00:09:28.038 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:28.038 Nvme2n1 : 1.02 11019.48 43.04 0.00 0.00 11507.00 8928.92 27588.08 00:09:28.038 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:28.038 Nvme2n2 : 1.02 11074.81 43.26 0.00 0.00 11394.72 4550.32 22780.20 00:09:28.038 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:28.038 Nvme2n3 : 1.02 11063.17 43.22 0.00 0.00 11373.09 4950.97 21635.47 00:09:28.038 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:28.038 Nvme3n1 : 1.02 11052.05 43.17 0.00 0.00 11352.00 5151.30 20261.79 00:09:28.038 =================================================================================================================== 00:09:28.038 Total : 66286.04 258.93 0.00 0.00 11456.01 4550.32 31136.75 00:09:28.972 00:09:28.972 real 0m3.614s 00:09:28.972 user 0m3.258s 00:09:28.972 sys 0m0.238s 00:09:28.972 15:05:07 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:28.972 15:05:07 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:09:28.972 ************************************ 00:09:28.972 END TEST bdev_write_zeroes 00:09:28.972 ************************************ 00:09:29.230 15:05:07 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:29.230 15:05:07 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:29.230 15:05:07 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:09:29.230 15:05:07 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:29.230 15:05:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:29.230 ************************************ 00:09:29.230 START TEST bdev_json_nonenclosed 00:09:29.230 ************************************ 00:09:29.230 15:05:07 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:29.230 [2024-07-15 15:05:07.223263] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
00:09:29.230 [2024-07-15 15:05:07.223405] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67773 ] 00:09:29.489 [2024-07-15 15:05:07.387964] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.763 [2024-07-15 15:05:07.618809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.763 [2024-07-15 15:05:07.618898] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:29.763 [2024-07-15 15:05:07.618918] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:29.763 [2024-07-15 15:05:07.618931] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:30.035 00:09:30.035 real 0m0.948s 00:09:30.035 user 0m0.703s 00:09:30.035 sys 0m0.138s 00:09:30.035 15:05:08 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:09:30.035 15:05:08 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:30.035 15:05:08 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:09:30.035 ************************************ 00:09:30.035 END TEST bdev_json_nonenclosed 00:09:30.035 ************************************ 00:09:30.035 15:05:08 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:09:30.035 15:05:08 blockdev_nvme -- bdev/blockdev.sh@781 -- # true 00:09:30.035 15:05:08 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:30.035 15:05:08 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:09:30.035 15:05:08 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:30.035 15:05:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:30.294 ************************************ 00:09:30.294 START TEST bdev_json_nonarray 00:09:30.294 ************************************ 00:09:30.294 15:05:08 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:30.294 [2024-07-15 15:05:08.232104] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:09:30.294 [2024-07-15 15:05:08.232217] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67804 ] 00:09:30.294 [2024-07-15 15:05:08.393743] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.558 [2024-07-15 15:05:08.632844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.558 [2024-07-15 15:05:08.632937] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
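Both JSON negative tests feed bdevperf a deliberately malformed --json file and expect it to refuse the configuration, which is why the harness treats the non-zero exit (234 in the trace) as a pass. For contrast, a minimal well-formed config has the shape the two error messages point at: a top-level object whose subsystems member is an array. The malloc bdev entry and the /tmp path below are illustrative, not taken from the suite's generated bdev.json.

cat > /tmp/minimal_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 8192, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF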
00:09:30.558 [2024-07-15 15:05:08.632953] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:30.558 [2024-07-15 15:05:08.632964] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:31.126 00:09:31.126 real 0m0.947s 00:09:31.126 user 0m0.714s 00:09:31.126 sys 0m0.126s 00:09:31.126 15:05:09 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:09:31.126 15:05:09 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:31.126 15:05:09 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:09:31.126 ************************************ 00:09:31.126 END TEST bdev_json_nonarray 00:09:31.126 ************************************ 00:09:31.126 15:05:09 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:09:31.126 15:05:09 blockdev_nvme -- bdev/blockdev.sh@784 -- # true 00:09:31.126 15:05:09 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:09:31.126 15:05:09 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:09:31.126 15:05:09 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:09:31.126 15:05:09 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:09:31.126 15:05:09 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:09:31.126 15:05:09 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:31.126 15:05:09 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:31.126 15:05:09 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:09:31.126 15:05:09 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:09:31.126 15:05:09 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:09:31.126 15:05:09 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:09:31.126 00:09:31.126 real 0m44.544s 00:09:31.126 user 1m6.236s 00:09:31.126 sys 0m6.319s 00:09:31.126 15:05:09 blockdev_nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:31.126 15:05:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:31.126 ************************************ 00:09:31.126 END TEST blockdev_nvme 00:09:31.126 ************************************ 00:09:31.126 15:05:09 -- common/autotest_common.sh@1142 -- # return 0 00:09:31.126 15:05:09 -- spdk/autotest.sh@213 -- # uname -s 00:09:31.126 15:05:09 -- spdk/autotest.sh@213 -- # [[ Linux == Linux ]] 00:09:31.126 15:05:09 -- spdk/autotest.sh@214 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:09:31.126 15:05:09 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:31.126 15:05:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:31.126 15:05:09 -- common/autotest_common.sh@10 -- # set +x 00:09:31.126 ************************************ 00:09:31.126 START TEST blockdev_nvme_gpt 00:09:31.126 ************************************ 00:09:31.126 15:05:09 blockdev_nvme_gpt -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:09:31.385 * Looking for test storage... 
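The blockdev_nvme run finishes the way it started: blockdev.sh installs a cleanup trap before the sub-tests and clears it once they pass, so the generated aiofile and bdev.json are removed on any exit path. The pattern, reduced to its essentials with the file names from the trace.

cleanup() {
    rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
    rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
}
trap cleanup SIGINT SIGTERM EXIT   # installed before bdev_nbd, bdev_verify, etc. run
# ... sub-tests run here ...
trap - SIGINT SIGTERM EXIT         # cleared once everything passed
cleanup                            # explicit cleanup on the success path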
00:09:31.385 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:31.385 15:05:09 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:31.385 15:05:09 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:09:31.385 15:05:09 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:09:31.385 15:05:09 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:31.385 15:05:09 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:09:31.385 15:05:09 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:09:31.385 15:05:09 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:09:31.385 15:05:09 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:09:31.385 15:05:09 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:09:31.385 15:05:09 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:09:31.385 15:05:09 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:09:31.385 15:05:09 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:09:31.385 15:05:09 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:09:31.385 15:05:09 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:09:31.385 15:05:09 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:09:31.385 15:05:09 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:09:31.385 15:05:09 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:09:31.385 15:05:09 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:09:31.385 15:05:09 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:09:31.385 15:05:09 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:09:31.385 15:05:09 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:09:31.385 15:05:09 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:09:31.385 15:05:09 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:09:31.385 15:05:09 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:09:31.385 15:05:09 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=67880 00:09:31.385 15:05:09 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:31.385 15:05:09 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:31.385 15:05:09 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 67880 00:09:31.385 15:05:09 blockdev_nvme_gpt -- common/autotest_common.sh@829 -- # '[' -z 67880 ']' 00:09:31.385 15:05:09 blockdev_nvme_gpt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.385 15:05:09 blockdev_nvme_gpt -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:31.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.385 15:05:09 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
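For context on the start-up sequence that follows: spdk_tgt is launched and waitforlisten blocks until the target answers on /var/tmp/spdk.sock. A rough sketch of that wait, under the assumption that polling rpc.py rpc_get_methods is an adequate liveness probe; the real helper in test/common/autotest_common.sh is more involved:

    # Start the target and poll the RPC socket until it responds (sketch only).
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    for _ in $(seq 1 100); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods &>/dev/null && break
        sleep 0.1
    done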
00:09:31.385 15:05:09 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:31.385 15:05:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:31.385 [2024-07-15 15:05:09.468355] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:09:31.385 [2024-07-15 15:05:09.468469] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67880 ] 00:09:31.643 [2024-07-15 15:05:09.633650] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.902 [2024-07-15 15:05:09.861382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.836 15:05:10 blockdev_nvme_gpt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:32.836 15:05:10 blockdev_nvme_gpt -- common/autotest_common.sh@862 -- # return 0 00:09:32.836 15:05:10 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:09:32.836 15:05:10 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:09:32.836 15:05:10 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:33.401 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:33.660 Waiting for block devices as requested 00:09:33.660 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:33.660 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:33.919 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:33.919 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:39.184 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:39.184 15:05:16 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:09:39.184 15:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:09:39.184 15:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:09:39.184 15:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # local nvme bdf 00:09:39.184 15:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:09:39.184 15:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:09:39.184 15:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:09:39.184 15:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:39.184 15:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:09:39.184 15:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:09:39.184 15:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:09:39.184 15:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:09:39.184 15:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:09:39.184 15:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:09:39.184 15:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:09:39.184 15:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:09:39.184 15:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local 
device=nvme2n1 00:09:39.184 15:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:09:39.184 15:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:09:39.184 15:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:09:39.184 15:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:09:39.184 15:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:09:39.184 15:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:09:39.184 15:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:09:39.184 15:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:09:39.184 15:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:09:39.184 15:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:09:39.184 15:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:09:39.184 15:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:09:39.184 15:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:09:39.184 15:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:09:39.184 15:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:09:39.185 15:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:09:39.185 15:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:09:39.185 15:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:09:39.185 15:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:09:39.185 15:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:09:39.185 15:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:09:39.185 15:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:09:39.185 15:05:16 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:09:39.185 15:05:16 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:09:39.185 15:05:16 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:09:39.185 15:05:16 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:09:39.185 15:05:16 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:09:39.185 15:05:16 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:09:39.185 15:05:16 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:09:39.185 15:05:17 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:09:39.185 BYT; 00:09:39.185 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:09:39.185 15:05:17 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:09:39.185 BYT; 00:09:39.185 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ 
\u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:09:39.185 15:05:17 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:09:39.185 15:05:17 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:09:39.185 15:05:17 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:09:39.185 15:05:17 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:09:39.185 15:05:17 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:39.185 15:05:17 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:09:39.185 15:05:17 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:09:39.185 15:05:17 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid 00:09:39.185 15:05:17 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:39.185 15:05:17 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:39.185 15:05:17 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()' 00:09:39.185 15:05:17 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:09:39.185 15:05:17 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:39.185 15:05:17 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:09:39.185 15:05:17 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:39.185 15:05:17 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:39.185 15:05:17 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:39.185 15:05:17 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:09:39.185 15:05:17 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid 00:09:39.185 15:05:17 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:39.185 15:05:17 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:39.185 15:05:17 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()' 00:09:39.185 15:05:17 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:09:39.185 15:05:17 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:39.185 15:05:17 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:09:39.185 15:05:17 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:39.185 15:05:17 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:39.185 15:05:17 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:39.185 15:05:17 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:09:40.121 The operation has completed successfully. 
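Condensed from the trace above and continued just below, this is the whole GPT preparation: parted writes a fresh GPT label with two test partitions, then sgdisk stamps them with the partition type GUIDs grepped out of module/bdev/gpt/gpt.h (SPDK_GPT_PART_TYPE_GUID and SPDK_GPT_PART_TYPE_GUID_OLD) so the SPDK gpt module will expose them as bdevs. Commands and GUIDs are taken verbatim from the trace:

    parted -s /dev/nvme0n1 mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% \
        mkpart SPDK_TEST_second 50% 100%
    sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b \
           -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
    sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c \
           -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1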
00:09:40.121 15:05:18 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:09:41.088 The operation has completed successfully. 00:09:41.088 15:05:19 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:42.027 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:42.595 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:42.595 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:42.595 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:42.595 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:42.595 15:05:20 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:09:42.595 15:05:20 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.595 15:05:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:42.854 [] 00:09:42.854 15:05:20 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.854 15:05:20 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:09:42.854 15:05:20 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:09:42.854 15:05:20 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:09:42.854 15:05:20 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:42.854 15:05:20 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:09:42.854 15:05:20 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.854 15:05:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:43.114 15:05:21 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.114 15:05:21 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:09:43.114 15:05:21 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.114 15:05:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:43.114 15:05:21 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.114 15:05:21 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:09:43.114 15:05:21 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:09:43.114 15:05:21 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.114 15:05:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:43.114 15:05:21 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.114 15:05:21 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:09:43.114 15:05:21 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.114 15:05:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:43.114 15:05:21 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.114 
15:05:21 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:43.114 15:05:21 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.114 15:05:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:43.114 15:05:21 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.114 15:05:21 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:09:43.114 15:05:21 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:09:43.114 15:05:21 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:09:43.114 15:05:21 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.114 15:05:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:43.374 15:05:21 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.374 15:05:21 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:09:43.374 15:05:21 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:09:43.375 15:05:21 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "d93f7354-c5c8-428e-bfb4-273190d89ecc"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "d93f7354-c5c8-428e-bfb4-273190d89ecc",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' 
"seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "ab52d9a3-0f0f-45d9-b6c7-a26141f88c8b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ab52d9a3-0f0f-45d9-b6c7-a26141f88c8b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "cbaa32f9-8544-4724-86a3-5b3991ff9478"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cbaa32f9-8544-4724-86a3-5b3991ff9478",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' 
"nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "9c41769d-9b7c-4ecb-807e-c3c9ee34fe81"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9c41769d-9b7c-4ecb-807e-c3c9ee34fe81",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "7b884f22-529a-4044-90f5-aa1083e46f50"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "7b884f22-529a-4044-90f5-aa1083e46f50",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": 
"0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:09:43.375 15:05:21 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:09:43.375 15:05:21 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:09:43.375 15:05:21 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:09:43.375 15:05:21 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 67880 00:09:43.375 15:05:21 blockdev_nvme_gpt -- common/autotest_common.sh@948 -- # '[' -z 67880 ']' 00:09:43.375 15:05:21 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # kill -0 67880 00:09:43.375 15:05:21 blockdev_nvme_gpt -- common/autotest_common.sh@953 -- # uname 00:09:43.375 15:05:21 blockdev_nvme_gpt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:43.375 15:05:21 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67880 00:09:43.375 15:05:21 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:43.375 15:05:21 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:43.375 killing process with pid 67880 00:09:43.375 15:05:21 blockdev_nvme_gpt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67880' 00:09:43.375 15:05:21 blockdev_nvme_gpt -- common/autotest_common.sh@967 -- # kill 67880 00:09:43.375 15:05:21 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # wait 67880 00:09:45.919 15:05:23 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:45.919 15:05:23 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:45.919 15:05:23 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:09:45.919 15:05:23 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:45.919 15:05:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:45.919 ************************************ 00:09:45.919 START TEST bdev_hello_world 00:09:45.919 ************************************ 00:09:45.919 15:05:23 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:45.919 [2024-07-15 15:05:23.850671] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
00:09:45.919 [2024-07-15 15:05:23.850786] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68523 ] 00:09:45.919 [2024-07-15 15:05:24.011671] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.177 [2024-07-15 15:05:24.244929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.113 [2024-07-15 15:05:24.962237] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:47.113 [2024-07-15 15:05:24.962300] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:09:47.113 [2024-07-15 15:05:24.962324] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:47.113 [2024-07-15 15:05:24.965330] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:47.113 [2024-07-15 15:05:24.965921] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:47.113 [2024-07-15 15:05:24.965958] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:47.113 [2024-07-15 15:05:24.966138] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:09:47.113 00:09:47.113 [2024-07-15 15:05:24.966173] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:48.492 00:09:48.492 real 0m2.731s 00:09:48.492 user 0m2.377s 00:09:48.492 sys 0m0.243s 00:09:48.492 15:05:26 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:48.492 ************************************ 00:09:48.492 END TEST bdev_hello_world 00:09:48.492 ************************************ 00:09:48.492 15:05:26 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:48.492 15:05:26 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:09:48.492 15:05:26 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:09:48.492 15:05:26 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:48.492 15:05:26 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:48.492 15:05:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:48.492 ************************************ 00:09:48.492 START TEST bdev_bounds 00:09:48.492 ************************************ 00:09:48.492 15:05:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:09:48.492 15:05:26 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=68576 00:09:48.492 15:05:26 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:48.492 Process bdevio pid: 68576 00:09:48.492 15:05:26 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 68576' 00:09:48.492 15:05:26 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:48.492 15:05:26 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 68576 00:09:48.492 15:05:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 68576 ']' 00:09:48.492 15:05:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.492 15:05:26 blockdev_nvme_gpt.bdev_bounds -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:09:48.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.492 15:05:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.492 15:05:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:48.492 15:05:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:48.751 [2024-07-15 15:05:26.641633] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:09:48.751 [2024-07-15 15:05:26.641780] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68576 ] 00:09:48.751 [2024-07-15 15:05:26.811105] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:49.319 [2024-07-15 15:05:27.125627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.319 [2024-07-15 15:05:27.125706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.319 [2024-07-15 15:05:27.125739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:49.887 15:05:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:49.887 15:05:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:09:49.887 15:05:27 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:50.146 I/O targets: 00:09:50.146 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:09:50.146 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:09:50.146 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:09:50.146 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:50.146 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:50.146 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:50.146 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:50.146 00:09:50.146 00:09:50.146 CUnit - A unit testing framework for C - Version 2.1-3 00:09:50.146 http://cunit.sourceforge.net/ 00:09:50.146 00:09:50.146 00:09:50.146 Suite: bdevio tests on: Nvme3n1 00:09:50.146 Test: blockdev write read block ...passed 00:09:50.146 Test: blockdev write zeroes read block ...passed 00:09:50.146 Test: blockdev write zeroes read no split ...passed 00:09:50.146 Test: blockdev write zeroes read split ...passed 00:09:50.146 Test: blockdev write zeroes read split partial ...passed 00:09:50.146 Test: blockdev reset ...[2024-07-15 15:05:28.202583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:09:50.146 [2024-07-15 15:05:28.207452] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:50.146 passed 00:09:50.146 Test: blockdev write read 8 blocks ...passed 00:09:50.146 Test: blockdev write read size > 128k ...passed 00:09:50.146 Test: blockdev write read invalid size ...passed 00:09:50.146 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:50.146 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:50.146 Test: blockdev write read max offset ...passed 00:09:50.146 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:50.146 Test: blockdev writev readv 8 blocks ...passed 00:09:50.146 Test: blockdev writev readv 30 x 1block ...passed 00:09:50.146 Test: blockdev writev readv block ...passed 00:09:50.146 Test: blockdev writev readv size > 128k ...passed 00:09:50.146 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:50.146 Test: blockdev comparev and writev ...[2024-07-15 15:05:28.217209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26ca06000 len:0x1000 00:09:50.146 [2024-07-15 15:05:28.217280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:50.146 passed 00:09:50.146 Test: blockdev nvme passthru rw ...passed 00:09:50.146 Test: blockdev nvme passthru vendor specific ...[2024-07-15 15:05:28.218305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:50.146 [2024-07-15 15:05:28.218340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:50.146 passed 00:09:50.146 Test: blockdev nvme admin passthru ...passed 00:09:50.146 Test: blockdev copy ...passed 00:09:50.146 Suite: bdevio tests on: Nvme2n3 00:09:50.147 Test: blockdev write read block ...passed 00:09:50.147 Test: blockdev write zeroes read block ...passed 00:09:50.147 Test: blockdev write zeroes read no split ...passed 00:09:50.406 Test: blockdev write zeroes read split ...passed 00:09:50.406 Test: blockdev write zeroes read split partial ...passed 00:09:50.406 Test: blockdev reset ...[2024-07-15 15:05:28.304748] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:50.406 [2024-07-15 15:05:28.309644] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:50.406 passed 00:09:50.406 Test: blockdev write read 8 blocks ...passed 00:09:50.406 Test: blockdev write read size > 128k ...passed 00:09:50.406 Test: blockdev write read invalid size ...passed 00:09:50.406 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:50.406 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:50.406 Test: blockdev write read max offset ...passed 00:09:50.406 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:50.406 Test: blockdev writev readv 8 blocks ...passed 00:09:50.406 Test: blockdev writev readv 30 x 1block ...passed 00:09:50.406 Test: blockdev writev readv block ...passed 00:09:50.406 Test: blockdev writev readv size > 128k ...passed 00:09:50.406 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:50.406 Test: blockdev comparev and writev ...[2024-07-15 15:05:28.319494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27b83c000 len:0x1000 00:09:50.406 [2024-07-15 15:05:28.319551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:50.406 passed 00:09:50.406 Test: blockdev nvme passthru rw ...passed 00:09:50.406 Test: blockdev nvme passthru vendor specific ...[2024-07-15 15:05:28.320561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:50.406 [2024-07-15 15:05:28.320591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:50.406 passed 00:09:50.406 Test: blockdev nvme admin passthru ...passed 00:09:50.406 Test: blockdev copy ...passed 00:09:50.406 Suite: bdevio tests on: Nvme2n2 00:09:50.406 Test: blockdev write read block ...passed 00:09:50.406 Test: blockdev write zeroes read block ...passed 00:09:50.406 Test: blockdev write zeroes read no split ...passed 00:09:50.406 Test: blockdev write zeroes read split ...passed 00:09:50.406 Test: blockdev write zeroes read split partial ...passed 00:09:50.406 Test: blockdev reset ...[2024-07-15 15:05:28.407767] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:50.406 [2024-07-15 15:05:28.412689] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:50.406 passed 00:09:50.406 Test: blockdev write read 8 blocks ...passed 00:09:50.406 Test: blockdev write read size > 128k ...passed 00:09:50.406 Test: blockdev write read invalid size ...passed 00:09:50.406 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:50.406 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:50.406 Test: blockdev write read max offset ...passed 00:09:50.406 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:50.406 Test: blockdev writev readv 8 blocks ...passed 00:09:50.406 Test: blockdev writev readv 30 x 1block ...passed 00:09:50.406 Test: blockdev writev readv block ...passed 00:09:50.406 Test: blockdev writev readv size > 128k ...passed 00:09:50.406 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:50.406 Test: blockdev comparev and writev ...[2024-07-15 15:05:28.422192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27b836000 len:0x1000 00:09:50.406 [2024-07-15 15:05:28.422258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:50.406 passed 00:09:50.406 Test: blockdev nvme passthru rw ...passed 00:09:50.406 Test: blockdev nvme passthru vendor specific ...[2024-07-15 15:05:28.423146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:50.406 [2024-07-15 15:05:28.423184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:50.406 passed 00:09:50.406 Test: blockdev nvme admin passthru ...passed 00:09:50.406 Test: blockdev copy ...passed 00:09:50.406 Suite: bdevio tests on: Nvme2n1 00:09:50.406 Test: blockdev write read block ...passed 00:09:50.406 Test: blockdev write zeroes read block ...passed 00:09:50.406 Test: blockdev write zeroes read no split ...passed 00:09:50.406 Test: blockdev write zeroes read split ...passed 00:09:50.406 Test: blockdev write zeroes read split partial ...passed 00:09:50.406 Test: blockdev reset ...[2024-07-15 15:05:28.513366] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:50.666 [2024-07-15 15:05:28.518355] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:50.666 passed 00:09:50.666 Test: blockdev write read 8 blocks ...passed 00:09:50.666 Test: blockdev write read size > 128k ...passed 00:09:50.666 Test: blockdev write read invalid size ...passed 00:09:50.666 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:50.666 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:50.666 Test: blockdev write read max offset ...passed 00:09:50.666 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:50.666 Test: blockdev writev readv 8 blocks ...passed 00:09:50.666 Test: blockdev writev readv 30 x 1block ...passed 00:09:50.666 Test: blockdev writev readv block ...passed 00:09:50.666 Test: blockdev writev readv size > 128k ...passed 00:09:50.666 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:50.666 Test: blockdev comparev and writev ...[2024-07-15 15:05:28.528355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27b832000 len:0x1000 00:09:50.666 [2024-07-15 15:05:28.528432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:50.666 passed 00:09:50.667 Test: blockdev nvme passthru rw ...passed 00:09:50.667 Test: blockdev nvme passthru vendor specific ...[2024-07-15 15:05:28.529387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:50.667 [2024-07-15 15:05:28.529419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:50.667 passed 00:09:50.667 Test: blockdev nvme admin passthru ...passed 00:09:50.667 Test: blockdev copy ...passed 00:09:50.667 Suite: bdevio tests on: Nvme1n1p2 00:09:50.667 Test: blockdev write read block ...passed 00:09:50.667 Test: blockdev write zeroes read block ...passed 00:09:50.667 Test: blockdev write zeroes read no split ...passed 00:09:50.667 Test: blockdev write zeroes read split ...passed 00:09:50.667 Test: blockdev write zeroes read split partial ...passed 00:09:50.667 Test: blockdev reset ...[2024-07-15 15:05:28.632906] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:09:50.667 [2024-07-15 15:05:28.637433] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:50.667 passed 00:09:50.667 Test: blockdev write read 8 blocks ...passed 00:09:50.667 Test: blockdev write read size > 128k ...passed 00:09:50.667 Test: blockdev write read invalid size ...passed 00:09:50.667 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:50.667 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:50.667 Test: blockdev write read max offset ...passed 00:09:50.667 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:50.667 Test: blockdev writev readv 8 blocks ...passed 00:09:50.667 Test: blockdev writev readv 30 x 1block ...passed 00:09:50.667 Test: blockdev writev readv block ...passed 00:09:50.667 Test: blockdev writev readv size > 128k ...passed 00:09:50.667 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:50.667 Test: blockdev comparev and writev ...[2024-07-15 15:05:28.647811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x27b82e000 len:0x1000 00:09:50.667 [2024-07-15 15:05:28.647888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:50.667 passed 00:09:50.667 Test: blockdev nvme passthru rw ...passed 00:09:50.667 Test: blockdev nvme passthru vendor specific ...passed 00:09:50.667 Test: blockdev nvme admin passthru ...passed 00:09:50.667 Test: blockdev copy ...passed 00:09:50.667 Suite: bdevio tests on: Nvme1n1p1 00:09:50.667 Test: blockdev write read block ...passed 00:09:50.667 Test: blockdev write zeroes read block ...passed 00:09:50.667 Test: blockdev write zeroes read no split ...passed 00:09:50.667 Test: blockdev write zeroes read split ...passed 00:09:50.667 Test: blockdev write zeroes read split partial ...passed 00:09:50.667 Test: blockdev reset ...[2024-07-15 15:05:28.743998] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:09:50.667 [2024-07-15 15:05:28.748546] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:50.667 passed 00:09:50.667 Test: blockdev write read 8 blocks ...passed 00:09:50.667 Test: blockdev write read size > 128k ...passed 00:09:50.667 Test: blockdev write read invalid size ...passed 00:09:50.667 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:50.667 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:50.667 Test: blockdev write read max offset ...passed 00:09:50.667 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:50.667 Test: blockdev writev readv 8 blocks ...passed 00:09:50.667 Test: blockdev writev readv 30 x 1block ...passed 00:09:50.667 Test: blockdev writev readv block ...passed 00:09:50.667 Test: blockdev writev readv size > 128k ...passed 00:09:50.667 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:50.667 Test: blockdev comparev and writev ...[2024-07-15 15:05:28.759322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x27360e000 len:0x1000 00:09:50.667 [2024-07-15 15:05:28.759402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:50.667 passed 00:09:50.667 Test: blockdev nvme passthru rw ...passed 00:09:50.667 Test: blockdev nvme passthru vendor specific ...passed 00:09:50.667 Test: blockdev nvme admin passthru ...passed 00:09:50.667 Test: blockdev copy ...passed 00:09:50.667 Suite: bdevio tests on: Nvme0n1 00:09:50.667 Test: blockdev write read block ...passed 00:09:50.667 Test: blockdev write zeroes read block ...passed 00:09:50.667 Test: blockdev write zeroes read no split ...passed 00:09:50.927 Test: blockdev write zeroes read split ...passed 00:09:50.927 Test: blockdev write zeroes read split partial ...passed 00:09:50.927 Test: blockdev reset ...[2024-07-15 15:05:28.854998] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:50.927 passed 00:09:50.927 Test: blockdev write read 8 blocks ...[2024-07-15 15:05:28.859390] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:50.927 passed 00:09:50.927 Test: blockdev write read size > 128k ...passed 00:09:50.927 Test: blockdev write read invalid size ...passed 00:09:50.927 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:50.927 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:50.927 Test: blockdev write read max offset ...passed 00:09:50.927 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:50.927 Test: blockdev writev readv 8 blocks ...passed 00:09:50.927 Test: blockdev writev readv 30 x 1block ...passed 00:09:50.927 Test: blockdev writev readv block ...passed 00:09:50.927 Test: blockdev writev readv size > 128k ...passed 00:09:50.927 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:50.927 Test: blockdev comparev and writev ...[2024-07-15 15:05:28.868427] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:09:50.927 separate metadata which is not supported yet. 
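The comparev_and_writev skip on Nvme0n1 just above is expected: that namespace was created with separate, non-interleaved metadata (the bdev_get_bdevs dump earlier shows "md_size": 64 with "md_interleave": false for Nvme0n1), which bdevio does not support yet. A quick way to confirm the metadata layout over RPC, assuming bdev_get_bdevs accepts the -b name filter:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 \
        | jq '.[0] | {md_size, md_interleave}'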
00:09:50.927 passed 00:09:50.927 Test: blockdev nvme passthru rw ...passed 00:09:50.927 Test: blockdev nvme passthru vendor specific ...passed 00:09:50.928 Test: blockdev nvme admin passthru ...[2024-07-15 15:05:28.869112] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:09:50.928 [2024-07-15 15:05:28.869192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:09:50.928 passed 00:09:50.928 Test: blockdev copy ...passed 00:09:50.928 00:09:50.928 Run Summary: Type Total Ran Passed Failed Inactive 00:09:50.928 suites 7 7 n/a 0 0 00:09:50.928 tests 161 161 161 0 0 00:09:50.928 asserts 1025 1025 1025 0 n/a 00:09:50.928 00:09:50.928 Elapsed time = 2.120 seconds 00:09:50.928 0 00:09:50.928 15:05:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 68576 00:09:50.928 15:05:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 68576 ']' 00:09:50.928 15:05:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 68576 00:09:50.928 15:05:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:09:50.928 15:05:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:50.928 15:05:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68576 00:09:50.928 15:05:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:50.928 15:05:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:50.928 15:05:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68576' 00:09:50.928 killing process with pid 68576 00:09:50.928 15:05:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@967 -- # kill 68576 00:09:50.928 15:05:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # wait 68576 00:09:52.338 15:05:30 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:09:52.338 00:09:52.338 real 0m3.787s 00:09:52.338 user 0m9.183s 00:09:52.338 sys 0m0.546s 00:09:52.338 15:05:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:52.338 15:05:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:52.338 ************************************ 00:09:52.338 END TEST bdev_bounds 00:09:52.338 ************************************ 00:09:52.338 15:05:30 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:09:52.338 15:05:30 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:52.338 15:05:30 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:52.338 15:05:30 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:52.338 15:05:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:52.338 ************************************ 00:09:52.338 START TEST bdev_nbd 00:09:52.338 ************************************ 00:09:52.338 15:05:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:52.338 15:05:30 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:09:52.338 15:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:09:52.338 15:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:52.338 15:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:52.338 15:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:52.338 15:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:09:52.338 15:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:09:52.338 15:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:09:52.338 15:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:52.338 15:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:09:52.338 15:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:09:52.338 15:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:52.338 15:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:09:52.338 15:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:52.338 15:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:09:52.338 15:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=68647 00:09:52.338 15:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:52.338 15:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:52.338 15:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 68647 /var/tmp/spdk-nbd.sock 00:09:52.338 15:05:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 68647 ']' 00:09:52.338 15:05:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:52.338 15:05:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:52.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:52.338 15:05:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:52.338 15:05:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:52.338 15:05:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:52.597 [2024-07-15 15:05:30.503158] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
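The bdev_nbd test starting here exports each bdev through the kernel NBD driver and round-trips one block with dd. A condensed sketch of the flow the trace below walks through; the explicit /dev/nbd0 argument, the /tmp output path, and the nbd_stop_disk tear-down are assumptions not shown in this excerpt:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Export the bdev as a kernel block device and wait for it to appear.
    $rpc -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
    until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done
    # Read one 4 KiB block with O_DIRECT, as nbd_function_test does below.
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    $rpc -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0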
00:09:52.597 [2024-07-15 15:05:30.503273] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:52.597 [2024-07-15 15:05:30.669793] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.165 [2024-07-15 15:05:30.980798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.734 15:05:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:53.734 15:05:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:09:53.734 15:05:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:53.734 15:05:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:53.734 15:05:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:53.734 15:05:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:53.734 15:05:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:53.734 15:05:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:53.734 15:05:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:53.734 15:05:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:53.734 15:05:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:09:53.734 15:05:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:53.734 15:05:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:53.734 15:05:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:53.734 15:05:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:09:53.994 15:05:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:53.994 15:05:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:53.994 15:05:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:53.994 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:53.994 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:53.994 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:53.994 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:53.994 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:53.994 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:53.994 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:53.994 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:53.994 15:05:32 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:53.994 1+0 records in 00:09:53.994 1+0 records out 00:09:53.994 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000525896 s, 7.8 MB/s 00:09:53.994 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:53.994 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:53.994 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:53.994 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:53.994 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:53.994 15:05:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:53.994 15:05:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:53.994 15:05:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:09:54.254 15:05:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:54.254 15:05:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:54.254 15:05:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:54.254 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:54.254 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:54.254 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:54.254 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:54.254 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:54.254 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:54.254 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:54.254 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:54.254 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:54.254 1+0 records in 00:09:54.254 1+0 records out 00:09:54.254 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000848281 s, 4.8 MB/s 00:09:54.254 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:54.254 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:54.254 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:54.254 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:54.254 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:54.254 15:05:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:54.254 15:05:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:54.254 15:05:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:09:54.514 15:05:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:54.514 15:05:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:54.514 15:05:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:09:54.514 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:09:54.514 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:54.514 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:54.514 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:54.514 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:09:54.514 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:54.514 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:54.514 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:54.514 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:54.514 1+0 records in 00:09:54.514 1+0 records out 00:09:54.514 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000791087 s, 5.2 MB/s 00:09:54.514 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:54.514 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:54.514 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:54.514 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:54.514 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:54.514 15:05:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:54.514 15:05:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:54.514 15:05:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:54.772 15:05:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:54.772 15:05:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:54.772 15:05:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:54.772 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:09:54.772 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:54.772 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:54.772 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:54.772 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:09:54.772 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:54.772 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:54.772 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:54.772 15:05:32 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:54.772 1+0 records in 00:09:54.772 1+0 records out 00:09:54.772 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000626013 s, 6.5 MB/s 00:09:54.772 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:54.772 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:54.772 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:54.772 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:54.772 15:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:54.772 15:05:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:54.772 15:05:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:54.772 15:05:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:55.030 15:05:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:55.030 15:05:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:55.030 15:05:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:55.030 15:05:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:09:55.030 15:05:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:55.030 15:05:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:55.030 15:05:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:55.030 15:05:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:09:55.030 15:05:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:55.030 15:05:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:55.030 15:05:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:55.030 15:05:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:55.030 1+0 records in 00:09:55.030 1+0 records out 00:09:55.030 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000945838 s, 4.3 MB/s 00:09:55.030 15:05:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:55.030 15:05:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:55.030 15:05:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:55.030 15:05:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:55.030 15:05:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:55.031 15:05:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:55.031 15:05:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:55.031 15:05:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:09:55.297 15:05:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:55.297 15:05:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:55.297 15:05:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:55.297 15:05:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:09:55.297 15:05:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:55.297 15:05:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:55.297 15:05:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:55.297 15:05:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:09:55.297 15:05:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:55.297 15:05:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:55.297 15:05:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:55.297 15:05:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:55.297 1+0 records in 00:09:55.297 1+0 records out 00:09:55.297 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000653446 s, 6.3 MB/s 00:09:55.297 15:05:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:55.297 15:05:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:55.297 15:05:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:55.297 15:05:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:55.297 15:05:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:55.297 15:05:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:55.297 15:05:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:55.297 15:05:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:55.559 15:05:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:09:55.559 15:05:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:09:55.559 15:05:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:09:55.559 15:05:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:09:55.559 15:05:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:55.559 15:05:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:55.559 15:05:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:55.559 15:05:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:09:55.559 15:05:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:55.559 15:05:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:55.559 15:05:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:55.559 15:05:33 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:55.559 1+0 records in 00:09:55.559 1+0 records out 00:09:55.559 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000741058 s, 5.5 MB/s 00:09:55.559 15:05:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:55.559 15:05:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:55.559 15:05:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:55.559 15:05:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:55.559 15:05:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:55.559 15:05:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:55.559 15:05:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:55.559 15:05:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:55.819 15:05:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:55.819 { 00:09:55.819 "nbd_device": "/dev/nbd0", 00:09:55.819 "bdev_name": "Nvme0n1" 00:09:55.819 }, 00:09:55.819 { 00:09:55.819 "nbd_device": "/dev/nbd1", 00:09:55.819 "bdev_name": "Nvme1n1p1" 00:09:55.819 }, 00:09:55.819 { 00:09:55.819 "nbd_device": "/dev/nbd2", 00:09:55.819 "bdev_name": "Nvme1n1p2" 00:09:55.819 }, 00:09:55.819 { 00:09:55.819 "nbd_device": "/dev/nbd3", 00:09:55.819 "bdev_name": "Nvme2n1" 00:09:55.819 }, 00:09:55.819 { 00:09:55.819 "nbd_device": "/dev/nbd4", 00:09:55.819 "bdev_name": "Nvme2n2" 00:09:55.819 }, 00:09:55.819 { 00:09:55.819 "nbd_device": "/dev/nbd5", 00:09:55.819 "bdev_name": "Nvme2n3" 00:09:55.819 }, 00:09:55.819 { 00:09:55.819 "nbd_device": "/dev/nbd6", 00:09:55.819 "bdev_name": "Nvme3n1" 00:09:55.819 } 00:09:55.819 ]' 00:09:55.819 15:05:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:55.819 15:05:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:55.819 { 00:09:55.819 "nbd_device": "/dev/nbd0", 00:09:55.819 "bdev_name": "Nvme0n1" 00:09:55.819 }, 00:09:55.819 { 00:09:55.819 "nbd_device": "/dev/nbd1", 00:09:55.819 "bdev_name": "Nvme1n1p1" 00:09:55.819 }, 00:09:55.819 { 00:09:55.819 "nbd_device": "/dev/nbd2", 00:09:55.819 "bdev_name": "Nvme1n1p2" 00:09:55.819 }, 00:09:55.819 { 00:09:55.819 "nbd_device": "/dev/nbd3", 00:09:55.819 "bdev_name": "Nvme2n1" 00:09:55.819 }, 00:09:55.819 { 00:09:55.819 "nbd_device": "/dev/nbd4", 00:09:55.819 "bdev_name": "Nvme2n2" 00:09:55.819 }, 00:09:55.819 { 00:09:55.819 "nbd_device": "/dev/nbd5", 00:09:55.819 "bdev_name": "Nvme2n3" 00:09:55.819 }, 00:09:55.819 { 00:09:55.819 "nbd_device": "/dev/nbd6", 00:09:55.819 "bdev_name": "Nvme3n1" 00:09:55.819 } 00:09:55.819 ]' 00:09:55.819 15:05:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:55.819 15:05:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:09:55.819 15:05:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:55.819 15:05:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:09:55.819 15:05:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:55.819 15:05:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:55.819 15:05:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:55.820 15:05:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:56.079 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:56.079 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:56.079 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:56.079 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:56.079 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:56.079 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:56.079 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:56.079 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:56.079 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:56.079 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:56.339 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:56.339 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:56.339 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:56.339 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:56.339 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:56.339 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:56.339 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:56.340 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:56.340 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:56.340 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:56.340 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:56.340 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:56.340 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:56.340 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:56.340 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:56.340 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:56.340 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:56.340 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:56.340 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:56.340 15:05:34 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:56.599 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:56.599 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:56.599 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:56.599 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:56.599 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:56.599 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:56.599 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:56.599 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:56.599 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:56.599 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:56.857 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:56.857 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:56.857 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:56.857 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:56.857 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:56.857 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:56.857 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:56.857 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:56.857 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:56.857 15:05:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:57.115 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:57.115 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:57.115 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:57.115 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:57.115 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:57.115 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:57.115 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:57.115 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:57.115 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:57.115 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:09:57.374 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:09:57.374 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:09:57.374 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:09:57.374 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:57.374 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:57.374 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:09:57.374 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:57.374 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:57.374 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:57.374 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:57.374 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:57.374 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:57.374 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:57.374 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:57.632 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:57.632 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:57.632 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:57.632 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:57.632 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:57.632 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:57.632 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:09:57.632 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:57.632 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:09:57.632 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:57.632 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:57.632 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:57.632 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:57.632 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:57.632 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:57.632 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:57.632 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:57.632 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:57.632 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:57.632 
15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:57.632 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:57.632 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:09:57.632 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:57.632 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:57.633 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:09:57.633 /dev/nbd0 00:09:57.633 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:57.633 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:57.633 15:05:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:57.633 15:05:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:57.633 15:05:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:57.633 15:05:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:57.633 15:05:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:57.633 15:05:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:57.633 15:05:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:57.633 15:05:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:57.633 15:05:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:57.633 1+0 records in 00:09:57.633 1+0 records out 00:09:57.633 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00054041 s, 7.6 MB/s 00:09:57.633 15:05:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:57.633 15:05:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:57.633 15:05:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:57.633 15:05:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:57.633 15:05:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:57.633 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:57.633 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:57.633 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:09:57.891 /dev/nbd1 00:09:57.891 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:57.891 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:57.891 15:05:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:57.891 15:05:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:57.891 15:05:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:57.891 15:05:35 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:57.891 15:05:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:57.891 15:05:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:57.891 15:05:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:57.891 15:05:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:57.891 15:05:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:57.891 1+0 records in 00:09:57.891 1+0 records out 00:09:57.891 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000813698 s, 5.0 MB/s 00:09:57.891 15:05:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:57.891 15:05:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:57.891 15:05:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:57.891 15:05:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:57.891 15:05:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:57.891 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:57.891 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:57.891 15:05:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:09:58.149 /dev/nbd10 00:09:58.149 15:05:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:58.149 15:05:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:58.149 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:09:58.149 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:58.149 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:58.149 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:58.149 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:09:58.149 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:58.149 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:58.149 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:58.149 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:58.149 1+0 records in 00:09:58.149 1+0 records out 00:09:58.149 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000538144 s, 7.6 MB/s 00:09:58.149 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:58.149 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:58.149 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:58.149 15:05:36 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:58.149 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:58.149 15:05:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:58.149 15:05:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:58.149 15:05:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:09:58.407 /dev/nbd11 00:09:58.407 15:05:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:58.407 15:05:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:58.407 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:09:58.407 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:58.407 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:58.407 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:58.407 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:09:58.407 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:58.407 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:58.407 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:58.407 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:58.407 1+0 records in 00:09:58.407 1+0 records out 00:09:58.407 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000761888 s, 5.4 MB/s 00:09:58.407 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:58.407 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:58.407 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:58.408 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:58.408 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:58.408 15:05:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:58.408 15:05:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:58.408 15:05:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:09:58.666 /dev/nbd12 00:09:58.666 15:05:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:58.666 15:05:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:58.666 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:09:58.667 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:58.667 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:58.667 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:58.667 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 
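A minimal sketch (not captured output) of the write/read-back verification pass that follows once all seven NBD nodes are attached; it uses only commands that appear later in this trace, with the 1 MiB pattern size (256 x 4 KiB blocks) and the nbdrandtest path taken from the log. The same three steps are repeated for /dev/nbd1, /dev/nbd10 through /dev/nbd14:

  # build a 1 MiB random pattern file, as the write phase of nbd_dd_data_verify does
  dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256

  # write the pattern to an attached NBD device with direct I/O
  dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct

  # read it back and compare the first 1 MiB byte for byte (non-zero exit on mismatch)
  cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0

  # remove the pattern file once every device has been checked
  rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest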
00:09:58.667 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:58.667 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:58.667 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:58.667 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:58.667 1+0 records in 00:09:58.667 1+0 records out 00:09:58.667 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000811297 s, 5.0 MB/s 00:09:58.667 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:58.667 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:58.667 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:58.667 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:58.667 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:58.667 15:05:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:58.667 15:05:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:58.667 15:05:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:09:58.925 /dev/nbd13 00:09:58.926 15:05:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:58.926 15:05:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:58.926 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:09:58.926 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:58.926 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:58.926 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:58.926 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:09:58.926 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:58.926 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:58.926 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:58.926 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:58.926 1+0 records in 00:09:58.926 1+0 records out 00:09:58.926 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000958192 s, 4.3 MB/s 00:09:58.926 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:58.926 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:58.926 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:58.926 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:58.926 15:05:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:58.926 15:05:36 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:58.926 15:05:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:58.926 15:05:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:09:59.185 /dev/nbd14 00:09:59.185 15:05:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:09:59.185 15:05:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:09:59.185 15:05:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:09:59.185 15:05:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:59.185 15:05:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:59.185 15:05:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:59.185 15:05:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:09:59.185 15:05:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:59.185 15:05:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:59.185 15:05:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:59.185 15:05:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:59.185 1+0 records in 00:09:59.185 1+0 records out 00:09:59.185 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000539487 s, 7.6 MB/s 00:09:59.185 15:05:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:59.185 15:05:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:59.185 15:05:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:59.185 15:05:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:59.185 15:05:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:59.185 15:05:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:59.185 15:05:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:59.185 15:05:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:59.185 15:05:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:59.185 15:05:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:59.444 15:05:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:59.444 { 00:09:59.444 "nbd_device": "/dev/nbd0", 00:09:59.444 "bdev_name": "Nvme0n1" 00:09:59.444 }, 00:09:59.444 { 00:09:59.444 "nbd_device": "/dev/nbd1", 00:09:59.444 "bdev_name": "Nvme1n1p1" 00:09:59.444 }, 00:09:59.444 { 00:09:59.444 "nbd_device": "/dev/nbd10", 00:09:59.444 "bdev_name": "Nvme1n1p2" 00:09:59.444 }, 00:09:59.444 { 00:09:59.444 "nbd_device": "/dev/nbd11", 00:09:59.444 "bdev_name": "Nvme2n1" 00:09:59.444 }, 00:09:59.444 { 00:09:59.444 "nbd_device": "/dev/nbd12", 00:09:59.444 "bdev_name": "Nvme2n2" 00:09:59.444 }, 00:09:59.444 { 00:09:59.445 "nbd_device": "/dev/nbd13", 00:09:59.445 "bdev_name": "Nvme2n3" 
00:09:59.445 }, 00:09:59.445 { 00:09:59.445 "nbd_device": "/dev/nbd14", 00:09:59.445 "bdev_name": "Nvme3n1" 00:09:59.445 } 00:09:59.445 ]' 00:09:59.445 15:05:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:59.445 { 00:09:59.445 "nbd_device": "/dev/nbd0", 00:09:59.445 "bdev_name": "Nvme0n1" 00:09:59.445 }, 00:09:59.445 { 00:09:59.445 "nbd_device": "/dev/nbd1", 00:09:59.445 "bdev_name": "Nvme1n1p1" 00:09:59.445 }, 00:09:59.445 { 00:09:59.445 "nbd_device": "/dev/nbd10", 00:09:59.445 "bdev_name": "Nvme1n1p2" 00:09:59.445 }, 00:09:59.445 { 00:09:59.445 "nbd_device": "/dev/nbd11", 00:09:59.445 "bdev_name": "Nvme2n1" 00:09:59.445 }, 00:09:59.445 { 00:09:59.445 "nbd_device": "/dev/nbd12", 00:09:59.445 "bdev_name": "Nvme2n2" 00:09:59.445 }, 00:09:59.445 { 00:09:59.445 "nbd_device": "/dev/nbd13", 00:09:59.445 "bdev_name": "Nvme2n3" 00:09:59.445 }, 00:09:59.445 { 00:09:59.445 "nbd_device": "/dev/nbd14", 00:09:59.445 "bdev_name": "Nvme3n1" 00:09:59.445 } 00:09:59.445 ]' 00:09:59.445 15:05:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:59.445 15:05:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:59.445 /dev/nbd1 00:09:59.445 /dev/nbd10 00:09:59.445 /dev/nbd11 00:09:59.445 /dev/nbd12 00:09:59.445 /dev/nbd13 00:09:59.445 /dev/nbd14' 00:09:59.445 15:05:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:59.445 /dev/nbd1 00:09:59.445 /dev/nbd10 00:09:59.445 /dev/nbd11 00:09:59.445 /dev/nbd12 00:09:59.445 /dev/nbd13 00:09:59.445 /dev/nbd14' 00:09:59.445 15:05:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:59.445 15:05:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:09:59.445 15:05:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:09:59.445 15:05:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:09:59.445 15:05:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:09:59.445 15:05:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:09:59.445 15:05:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:59.445 15:05:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:59.445 15:05:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:59.445 15:05:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:59.445 15:05:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:59.445 15:05:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:59.445 256+0 records in 00:09:59.445 256+0 records out 00:09:59.445 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122746 s, 85.4 MB/s 00:09:59.445 15:05:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:59.445 15:05:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:59.445 256+0 records in 00:09:59.445 256+0 records out 00:09:59.445 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0981274 s, 10.7 MB/s 00:09:59.445 15:05:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:59.445 15:05:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:59.704 256+0 records in 00:09:59.704 256+0 records out 00:09:59.704 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0886577 s, 11.8 MB/s 00:09:59.705 15:05:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:59.705 15:05:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:59.705 256+0 records in 00:09:59.705 256+0 records out 00:09:59.705 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0980051 s, 10.7 MB/s 00:09:59.705 15:05:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:59.705 15:05:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:59.964 256+0 records in 00:09:59.964 256+0 records out 00:09:59.964 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.104847 s, 10.0 MB/s 00:09:59.964 15:05:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:59.964 15:05:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:59.964 256+0 records in 00:09:59.964 256+0 records out 00:09:59.964 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.103942 s, 10.1 MB/s 00:09:59.964 15:05:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:59.964 15:05:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:10:00.224 256+0 records in 00:10:00.224 256+0 records out 00:10:00.224 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.103006 s, 10.2 MB/s 00:10:00.224 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:00.224 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:10:00.224 256+0 records in 00:10:00.224 256+0 records out 00:10:00.224 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0994977 s, 10.5 MB/s 00:10:00.224 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:10:00.224 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:00.224 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:00.224 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:00.224 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:00.224 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:00.224 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:00.224 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:10:00.224 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:10:00.224 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:00.224 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:10:00.224 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:00.224 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:10:00.224 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:00.225 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:10:00.225 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:00.225 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:10:00.225 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:00.225 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:10:00.225 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:00.225 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:10:00.225 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:00.225 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:00.225 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:00.225 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:00.225 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:00.225 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:00.225 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:00.225 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:00.484 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:00.484 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:00.484 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:00.484 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:00.484 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:00.484 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:00.484 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:00.484 15:05:38 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:10:00.484 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:00.484 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:00.744 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:00.744 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:00.744 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:00.744 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:00.744 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:00.744 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:00.744 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:00.744 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:00.744 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:00.744 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:10:00.744 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:10:00.744 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:10:00.744 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:10:00.744 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:00.744 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:00.744 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:10:00.744 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:00.744 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:00.744 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:00.745 15:05:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:10:01.004 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:10:01.004 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:10:01.004 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:10:01.004 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:01.004 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:01.004 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:10:01.004 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:01.004 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:01.004 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:01.004 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:10:01.264 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:10:01.264 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:10:01.264 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:10:01.264 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:01.264 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:01.264 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:10:01.264 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:01.264 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:01.264 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:01.264 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:10:01.524 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:10:01.524 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:10:01.524 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:10:01.524 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:01.524 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:01.524 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:10:01.524 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:01.524 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:01.524 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:01.524 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:10:01.783 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:10:01.783 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:10:01.783 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:10:01.783 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:01.783 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:01.784 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:10:01.784 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:01.784 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:01.784 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:01.784 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:01.784 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:01.784 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:01.784 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:01.784 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:02.043 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:10:02.043 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:02.043 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:02.043 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:02.043 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:02.043 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:02.043 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:10:02.043 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:02.043 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:10:02.043 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:02.043 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:02.043 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:02.043 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:10:02.043 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:10:02.043 15:05:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:10:02.043 malloc_lvol_verify 00:10:02.302 15:05:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:10:02.302 328567f5-1545-475d-bbc2-7b8a2c3f5a4f 00:10:02.302 15:05:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:10:02.561 be042fd2-2eab-4b91-9de4-f36d1b16f0dc 00:10:02.561 15:05:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:10:02.819 /dev/nbd0 00:10:02.819 15:05:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:10:02.819 mke2fs 1.46.5 (30-Dec-2021) 00:10:02.819 Discarding device blocks: 0/4096 done 00:10:02.819 Creating filesystem with 4096 1k blocks and 1024 inodes 00:10:02.819 00:10:02.819 Allocating group tables: 0/1 done 00:10:02.819 Writing inode tables: 0/1 done 00:10:02.819 Creating journal (1024 blocks): done 00:10:02.819 Writing superblocks and filesystem accounting information: 0/1 done 00:10:02.819 00:10:02.819 15:05:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:10:02.819 15:05:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:02.819 15:05:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:02.819 15:05:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:02.819 15:05:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:02.819 15:05:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:02.819 15:05:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:10:02.819 15:05:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:03.078 15:05:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:03.078 15:05:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:03.078 15:05:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:03.078 15:05:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:03.078 15:05:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:03.078 15:05:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:03.078 15:05:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:03.078 15:05:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:03.078 15:05:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:10:03.078 15:05:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:10:03.078 15:05:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 68647 00:10:03.078 15:05:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 68647 ']' 00:10:03.078 15:05:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 68647 00:10:03.078 15:05:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:10:03.078 15:05:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:03.078 15:05:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68647 00:10:03.078 15:05:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:03.078 15:05:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:03.078 15:05:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68647' 00:10:03.078 killing process with pid 68647 00:10:03.078 15:05:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@967 -- # kill 68647 00:10:03.078 15:05:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # wait 68647 00:10:04.455 15:05:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:10:04.455 00:10:04.455 real 0m12.142s 00:10:04.455 user 0m16.147s 00:10:04.455 sys 0m4.241s 00:10:04.455 ************************************ 00:10:04.455 END TEST bdev_nbd 00:10:04.455 ************************************ 00:10:04.455 15:05:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:04.455 15:05:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:04.713 15:05:42 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:10:04.713 15:05:42 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:10:04.713 15:05:42 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:10:04.713 15:05:42 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:10:04.713 skipping fio tests on NVMe due to multi-ns failures. 00:10:04.713 15:05:42 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:10:04.713 15:05:42 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:04.713 15:05:42 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:04.713 15:05:42 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:10:04.713 15:05:42 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:04.713 15:05:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:04.713 ************************************ 00:10:04.713 START TEST bdev_verify 00:10:04.713 ************************************ 00:10:04.713 15:05:42 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:04.713 [2024-07-15 15:05:42.698066] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:10:04.713 [2024-07-15 15:05:42.698218] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69071 ] 00:10:04.970 [2024-07-15 15:05:42.868890] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:05.227 [2024-07-15 15:05:43.158421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.227 [2024-07-15 15:05:43.158467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.159 Running I/O for 5 seconds... 
00:10:11.439 00:10:11.439 Latency(us) 00:10:11.439 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:11.439 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:11.439 Verification LBA range: start 0x0 length 0xbd0bd 00:10:11.439 Nvme0n1 : 5.05 1393.36 5.44 0.00 0.00 91647.71 22436.78 70057.70 00:10:11.439 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:11.439 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:10:11.439 Nvme0n1 : 5.06 960.83 3.75 0.00 0.00 132831.89 32281.49 112641.79 00:10:11.439 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:11.439 Verification LBA range: start 0x0 length 0x4ff80 00:10:11.439 Nvme1n1p1 : 5.05 1392.84 5.44 0.00 0.00 91555.93 24726.25 68684.02 00:10:11.439 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:11.439 Verification LBA range: start 0x4ff80 length 0x4ff80 00:10:11.439 Nvme1n1p1 : 5.06 960.38 3.75 0.00 0.00 132578.32 29190.71 108520.75 00:10:11.439 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:11.439 Verification LBA range: start 0x0 length 0x4ff7f 00:10:11.439 Nvme1n1p2 : 5.06 1392.32 5.44 0.00 0.00 91459.96 24382.83 67768.23 00:10:11.439 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:11.439 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:10:11.439 Nvme1n1p2 : 5.07 960.07 3.75 0.00 0.00 132295.61 26557.82 112183.90 00:10:11.439 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:11.439 Verification LBA range: start 0x0 length 0x80000 00:10:11.439 Nvme2n1 : 5.06 1391.86 5.44 0.00 0.00 91362.89 23924.93 65020.87 00:10:11.439 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:11.439 Verification LBA range: start 0x80000 length 0x80000 00:10:11.439 Nvme2n1 : 5.07 959.74 3.75 0.00 0.00 132002.00 25069.67 116304.94 00:10:11.439 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:11.439 Verification LBA range: start 0x0 length 0x80000 00:10:11.439 Nvme2n2 : 5.06 1391.44 5.44 0.00 0.00 91264.79 22093.36 67310.34 00:10:11.439 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:11.439 Verification LBA range: start 0x80000 length 0x80000 00:10:11.439 Nvme2n2 : 5.08 969.93 3.79 0.00 0.00 130490.25 4922.35 119052.30 00:10:11.439 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:11.439 Verification LBA range: start 0x0 length 0x80000 00:10:11.439 Nvme2n3 : 5.06 1390.98 5.43 0.00 0.00 91172.45 17171.00 68684.02 00:10:11.439 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:11.439 Verification LBA range: start 0x80000 length 0x80000 00:10:11.439 Nvme2n3 : 5.08 969.70 3.79 0.00 0.00 130346.89 4922.35 119968.08 00:10:11.439 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:11.439 Verification LBA range: start 0x0 length 0x20000 00:10:11.439 Nvme3n1 : 5.07 1401.64 5.48 0.00 0.00 90431.60 2775.98 70057.70 00:10:11.439 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:11.439 Verification LBA range: start 0x20000 length 0x20000 00:10:11.439 Nvme3n1 : 5.10 979.53 3.83 0.00 0.00 128970.42 8528.27 119052.30 00:10:11.439 =================================================================================================================== 00:10:11.439 Total : 16514.62 64.51 0.00 0.00 107705.33 2775.98 119968.08 
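The bdev_verify run above is a plain bdevperf invocation against the generated bdev.json. A sketch of the same call, assuming it is run from the SPDK repo root with that config in place; -C makes every core drive every bdev, which is why each Nvme job appears twice in the table, once per core mask:

# -q 128: 128 outstanding I/Os per job; -o 4096: 4 KiB I/O size; -w verify:
# write a pattern, read it back and compare; -t 5: run for 5 seconds;
# -m 0x3: pin reactors to cores 0 and 1.
./build/examples/bdevperf --json test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3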
00:10:13.347 00:10:13.347 real 0m8.405s 00:10:13.347 user 0m15.124s 00:10:13.347 sys 0m0.400s 00:10:13.347 15:05:51 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:13.347 15:05:51 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:10:13.347 ************************************ 00:10:13.347 END TEST bdev_verify 00:10:13.347 ************************************ 00:10:13.347 15:05:51 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:10:13.347 15:05:51 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:13.347 15:05:51 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:10:13.347 15:05:51 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:13.347 15:05:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:13.347 ************************************ 00:10:13.347 START TEST bdev_verify_big_io 00:10:13.347 ************************************ 00:10:13.347 15:05:51 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:13.347 [2024-07-15 15:05:51.162777] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:10:13.347 [2024-07-15 15:05:51.162902] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69181 ] 00:10:13.347 [2024-07-15 15:05:51.347864] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:13.606 [2024-07-15 15:05:51.639436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.606 [2024-07-15 15:05:51.639450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.987 Running I/O for 5 seconds... 
00:10:21.614 00:10:21.614 Latency(us) 00:10:21.614 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:21.614 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:21.614 Verification LBA range: start 0x0 length 0xbd0b 00:10:21.614 Nvme0n1 : 5.59 180.94 11.31 0.00 0.00 680052.50 21406.52 681345.45 00:10:21.614 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:21.614 Verification LBA range: start 0xbd0b length 0xbd0b 00:10:21.614 Nvme0n1 : 5.69 86.53 5.41 0.00 0.00 1404909.94 17285.48 1604458.65 00:10:21.614 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:21.614 Verification LBA range: start 0x0 length 0x4ff8 00:10:21.614 Nvme1n1p1 : 5.59 183.42 11.46 0.00 0.00 660634.37 63647.19 688671.75 00:10:21.614 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:21.614 Verification LBA range: start 0x4ff8 length 0x4ff8 00:10:21.614 Nvme1n1p1 : 5.69 93.36 5.83 0.00 0.00 1244300.66 56320.89 1296754.25 00:10:21.614 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:21.614 Verification LBA range: start 0x0 length 0x4ff7 00:10:21.614 Nvme1n1p2 : 5.51 185.75 11.61 0.00 0.00 651157.58 106689.17 597093.06 00:10:21.614 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:21.614 Verification LBA range: start 0x4ff7 length 0x4ff7 00:10:21.614 Nvme1n1p2 : 5.85 104.69 6.54 0.00 0.00 1065926.57 47620.92 1164880.94 00:10:21.614 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:21.614 Verification LBA range: start 0x0 length 0x8000 00:10:21.614 Nvme2n1 : 5.59 187.10 11.69 0.00 0.00 633323.99 75552.42 655703.42 00:10:21.614 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:21.614 Verification LBA range: start 0x8000 length 0x8000 00:10:21.614 Nvme2n1 : 5.93 112.48 7.03 0.00 0.00 959853.82 44415.66 1648416.42 00:10:21.614 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:21.614 Verification LBA range: start 0x0 length 0x8000 00:10:21.614 Nvme2n2 : 5.60 194.38 12.15 0.00 0.00 606461.49 3505.75 644713.98 00:10:21.614 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:21.614 Verification LBA range: start 0x8000 length 0x8000 00:10:21.614 Nvme2n2 : 6.09 133.72 8.36 0.00 0.00 780292.42 19918.37 2403024.82 00:10:21.614 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:21.614 Verification LBA range: start 0x0 length 0x8000 00:10:21.614 Nvme2n3 : 5.61 201.64 12.60 0.00 0.00 576561.42 3949.33 655703.42 00:10:21.614 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:21.614 Verification LBA range: start 0x8000 length 0x8000 00:10:21.614 Nvme2n3 : 6.28 181.46 11.34 0.00 0.00 557426.11 12763.78 2153930.79 00:10:21.614 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:21.614 Verification LBA range: start 0x0 length 0x2000 00:10:21.614 Nvme3n1 : 5.61 201.42 12.59 0.00 0.00 566605.84 4264.13 670356.01 00:10:21.614 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:21.614 Verification LBA range: start 0x2000 length 0x2000 00:10:21.614 Nvme3n1 : 6.37 232.99 14.56 0.00 0.00 421791.81 436.43 2212541.15 00:10:21.614 =================================================================================================================== 00:10:21.614 Total : 2279.87 142.49 0.00 0.00 696241.65 
436.43 2403024.82 00:10:23.515 00:10:23.515 real 0m10.365s 00:10:23.515 user 0m19.050s 00:10:23.515 sys 0m0.452s 00:10:23.515 15:06:01 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:23.515 15:06:01 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:10:23.515 ************************************ 00:10:23.515 END TEST bdev_verify_big_io 00:10:23.515 ************************************ 00:10:23.515 15:06:01 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:10:23.515 15:06:01 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:23.515 15:06:01 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:10:23.515 15:06:01 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:23.515 15:06:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:23.515 ************************************ 00:10:23.515 START TEST bdev_write_zeroes 00:10:23.515 ************************************ 00:10:23.515 15:06:01 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:23.515 [2024-07-15 15:06:01.593896] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:10:23.515 [2024-07-15 15:06:01.594032] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69307 ] 00:10:23.774 [2024-07-15 15:06:01.757460] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.032 [2024-07-15 15:06:01.992735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.047 Running I/O for 1 seconds... 
00:10:25.993 00:10:25.993 Latency(us) 00:10:25.993 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:25.993 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:25.993 Nvme0n1 : 1.02 8763.62 34.23 0.00 0.00 14559.21 10588.79 33884.12 00:10:25.993 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:25.993 Nvme1n1p1 : 1.02 8752.71 34.19 0.00 0.00 14553.47 10817.73 34342.01 00:10:25.993 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:25.993 Nvme1n1p2 : 1.02 8741.89 34.15 0.00 0.00 14512.38 10874.97 32510.43 00:10:25.993 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:25.993 Nvme2n1 : 1.02 8769.77 34.26 0.00 0.00 14402.02 9157.87 27130.19 00:10:25.993 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:25.993 Nvme2n2 : 1.02 8809.21 34.41 0.00 0.00 14295.13 4893.74 27473.61 00:10:25.993 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:25.993 Nvme2n3 : 1.03 8800.92 34.38 0.00 0.00 14273.06 4922.35 27702.55 00:10:25.993 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:25.993 Nvme3n1 : 1.03 8790.86 34.34 0.00 0.00 14248.49 5294.39 27244.66 00:10:25.993 =================================================================================================================== 00:10:25.993 Total : 61428.98 239.96 0.00 0.00 14405.43 4893.74 34342.01 00:10:27.365 00:10:27.365 real 0m3.559s 00:10:27.365 user 0m3.196s 00:10:27.365 sys 0m0.248s 00:10:27.365 15:06:05 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:27.365 15:06:05 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:10:27.365 ************************************ 00:10:27.365 END TEST bdev_write_zeroes 00:10:27.365 ************************************ 00:10:27.365 15:06:05 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:10:27.365 15:06:05 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:27.365 15:06:05 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:10:27.365 15:06:05 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:27.365 15:06:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:27.365 ************************************ 00:10:27.365 START TEST bdev_json_nonenclosed 00:10:27.365 ************************************ 00:10:27.365 15:06:05 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:27.365 [2024-07-15 15:06:05.222176] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
00:10:27.365 [2024-07-15 15:06:05.222288] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69365 ] 00:10:27.365 [2024-07-15 15:06:05.383966] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.622 [2024-07-15 15:06:05.611100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.622 [2024-07-15 15:06:05.611198] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:10:27.622 [2024-07-15 15:06:05.611214] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:27.622 [2024-07-15 15:06:05.611227] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:28.189 00:10:28.189 real 0m0.917s 00:10:28.189 user 0m0.691s 00:10:28.189 sys 0m0.121s 00:10:28.189 15:06:06 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:10:28.189 15:06:06 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:28.189 15:06:06 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:10:28.189 ************************************ 00:10:28.189 END TEST bdev_json_nonenclosed 00:10:28.189 ************************************ 00:10:28.189 15:06:06 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 234 00:10:28.189 15:06:06 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # true 00:10:28.189 15:06:06 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:28.189 15:06:06 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:10:28.189 15:06:06 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:28.189 15:06:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:28.189 ************************************ 00:10:28.189 START TEST bdev_json_nonarray 00:10:28.189 ************************************ 00:10:28.189 15:06:06 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:28.189 [2024-07-15 15:06:06.206742] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:10:28.189 [2024-07-15 15:06:06.206859] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69391 ] 00:10:28.447 [2024-07-15 15:06:06.368546] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.706 [2024-07-15 15:06:06.595978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.706 [2024-07-15 15:06:06.596116] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:10:28.706 [2024-07-15 15:06:06.596132] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:28.706 [2024-07-15 15:06:06.596145] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:28.965 00:10:28.965 real 0m0.916s 00:10:28.965 user 0m0.677s 00:10:28.965 sys 0m0.134s 00:10:28.965 15:06:07 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:10:28.965 15:06:07 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:28.965 15:06:07 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:10:28.965 ************************************ 00:10:28.965 END TEST bdev_json_nonarray 00:10:28.965 ************************************ 00:10:29.225 15:06:07 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 234 00:10:29.225 15:06:07 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # true 00:10:29.225 15:06:07 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:10:29.225 15:06:07 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:10:29.225 15:06:07 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:10:29.225 15:06:07 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:29.225 15:06:07 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:29.225 15:06:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:29.225 ************************************ 00:10:29.225 START TEST bdev_gpt_uuid 00:10:29.225 ************************************ 00:10:29.225 15:06:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1123 -- # bdev_gpt_uuid 00:10:29.225 15:06:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:10:29.225 15:06:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:10:29.225 15:06:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=69422 00:10:29.225 15:06:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:29.225 15:06:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:29.225 15:06:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 69422 00:10:29.225 15:06:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@829 -- # '[' -z 69422 ']' 00:10:29.225 15:06:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.225 15:06:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:29.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.225 15:06:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.225 15:06:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:29.225 15:06:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:29.225 [2024-07-15 15:06:07.208952] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
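The two JSON negative tests above (bdev_json_nonenclosed and bdev_json_nonarray) only check that bdevperf rejects malformed --json files and exits with status 234. For contrast, the shape the loader expects is a single object whose "subsystems" member is an array of subsystem entries; a hypothetical minimal example follows (the malloc bdev name and sizes are placeholders, not taken from this run):

cat > /tmp/minimal_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 65536, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF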
00:10:29.225 [2024-07-15 15:06:07.209090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69422 ] 00:10:29.484 [2024-07-15 15:06:07.356946] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.484 [2024-07-15 15:06:07.587918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.437 15:06:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:30.438 15:06:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@862 -- # return 0 00:10:30.438 15:06:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:30.438 15:06:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.438 15:06:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:31.006 Some configs were skipped because the RPC state that can call them passed over. 00:10:31.007 15:06:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.007 15:06:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:10:31.007 15:06:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.007 15:06:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:31.007 15:06:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.007 15:06:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:10:31.007 15:06:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.007 15:06:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:31.007 15:06:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.007 15:06:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:10:31.007 { 00:10:31.007 "name": "Nvme1n1p1", 00:10:31.007 "aliases": [ 00:10:31.007 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:10:31.007 ], 00:10:31.007 "product_name": "GPT Disk", 00:10:31.007 "block_size": 4096, 00:10:31.007 "num_blocks": 655104, 00:10:31.007 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:10:31.007 "assigned_rate_limits": { 00:10:31.007 "rw_ios_per_sec": 0, 00:10:31.007 "rw_mbytes_per_sec": 0, 00:10:31.007 "r_mbytes_per_sec": 0, 00:10:31.007 "w_mbytes_per_sec": 0 00:10:31.007 }, 00:10:31.007 "claimed": false, 00:10:31.007 "zoned": false, 00:10:31.007 "supported_io_types": { 00:10:31.007 "read": true, 00:10:31.007 "write": true, 00:10:31.007 "unmap": true, 00:10:31.007 "flush": true, 00:10:31.007 "reset": true, 00:10:31.007 "nvme_admin": false, 00:10:31.007 "nvme_io": false, 00:10:31.007 "nvme_io_md": false, 00:10:31.007 "write_zeroes": true, 00:10:31.007 "zcopy": false, 00:10:31.007 "get_zone_info": false, 00:10:31.007 "zone_management": false, 00:10:31.007 "zone_append": false, 00:10:31.007 "compare": true, 00:10:31.007 "compare_and_write": false, 00:10:31.007 "abort": true, 00:10:31.007 "seek_hole": false, 00:10:31.007 "seek_data": false, 00:10:31.007 "copy": true, 00:10:31.007 "nvme_iov_md": false 00:10:31.007 }, 00:10:31.007 "driver_specific": { 
00:10:31.007 "gpt": { 00:10:31.007 "base_bdev": "Nvme1n1", 00:10:31.007 "offset_blocks": 256, 00:10:31.007 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:10:31.007 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:10:31.007 "partition_name": "SPDK_TEST_first" 00:10:31.007 } 00:10:31.007 } 00:10:31.007 } 00:10:31.007 ]' 00:10:31.007 15:06:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:10:31.007 15:06:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:10:31.007 15:06:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:10:31.007 15:06:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:10:31.007 15:06:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:10:31.007 15:06:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:10:31.007 15:06:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:10:31.007 15:06:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.007 15:06:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:31.007 15:06:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.007 15:06:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:10:31.007 { 00:10:31.007 "name": "Nvme1n1p2", 00:10:31.007 "aliases": [ 00:10:31.007 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:10:31.007 ], 00:10:31.007 "product_name": "GPT Disk", 00:10:31.007 "block_size": 4096, 00:10:31.007 "num_blocks": 655103, 00:10:31.007 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:10:31.007 "assigned_rate_limits": { 00:10:31.007 "rw_ios_per_sec": 0, 00:10:31.007 "rw_mbytes_per_sec": 0, 00:10:31.007 "r_mbytes_per_sec": 0, 00:10:31.007 "w_mbytes_per_sec": 0 00:10:31.007 }, 00:10:31.007 "claimed": false, 00:10:31.007 "zoned": false, 00:10:31.007 "supported_io_types": { 00:10:31.007 "read": true, 00:10:31.007 "write": true, 00:10:31.007 "unmap": true, 00:10:31.007 "flush": true, 00:10:31.007 "reset": true, 00:10:31.007 "nvme_admin": false, 00:10:31.007 "nvme_io": false, 00:10:31.007 "nvme_io_md": false, 00:10:31.007 "write_zeroes": true, 00:10:31.007 "zcopy": false, 00:10:31.007 "get_zone_info": false, 00:10:31.007 "zone_management": false, 00:10:31.007 "zone_append": false, 00:10:31.007 "compare": true, 00:10:31.007 "compare_and_write": false, 00:10:31.007 "abort": true, 00:10:31.007 "seek_hole": false, 00:10:31.007 "seek_data": false, 00:10:31.007 "copy": true, 00:10:31.007 "nvme_iov_md": false 00:10:31.007 }, 00:10:31.007 "driver_specific": { 00:10:31.007 "gpt": { 00:10:31.007 "base_bdev": "Nvme1n1", 00:10:31.007 "offset_blocks": 655360, 00:10:31.007 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:10:31.007 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:10:31.007 "partition_name": "SPDK_TEST_second" 00:10:31.007 } 00:10:31.007 } 00:10:31.007 } 00:10:31.007 ]' 00:10:31.007 15:06:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:10:31.007 15:06:09 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:10:31.007 15:06:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:10:31.007 15:06:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:10:31.007 15:06:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:10:31.267 15:06:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:10:31.267 15:06:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 69422 00:10:31.267 15:06:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@948 -- # '[' -z 69422 ']' 00:10:31.267 15:06:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # kill -0 69422 00:10:31.267 15:06:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@953 -- # uname 00:10:31.267 15:06:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:31.267 15:06:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69422 00:10:31.267 killing process with pid 69422 00:10:31.267 15:06:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:31.267 15:06:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:31.267 15:06:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69422' 00:10:31.267 15:06:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@967 -- # kill 69422 00:10:31.267 15:06:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # wait 69422 00:10:33.800 00:10:33.800 real 0m4.501s 00:10:33.800 user 0m4.615s 00:10:33.800 sys 0m0.462s 00:10:33.800 15:06:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:33.800 15:06:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:33.800 ************************************ 00:10:33.800 END TEST bdev_gpt_uuid 00:10:33.800 ************************************ 00:10:33.800 15:06:11 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:10:33.800 15:06:11 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:10:33.800 15:06:11 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:10:33.800 15:06:11 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:10:33.800 15:06:11 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:10:33.800 15:06:11 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:33.800 15:06:11 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:10:33.800 15:06:11 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:10:33.801 15:06:11 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:10:33.801 15:06:11 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:34.059 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:34.630 Waiting for block devices as requested 00:10:34.631 
0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:34.631 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:34.631 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:34.889 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:40.166 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:40.166 15:06:17 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:10:40.166 15:06:17 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:10:40.166 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:10:40.166 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:10:40.166 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:10:40.166 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:10:40.166 15:06:18 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:10:40.166 00:10:40.166 real 1m8.875s 00:10:40.166 user 1m27.013s 00:10:40.166 sys 0m10.916s 00:10:40.166 15:06:18 blockdev_nvme_gpt -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:40.166 15:06:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:40.166 ************************************ 00:10:40.166 END TEST blockdev_nvme_gpt 00:10:40.166 ************************************ 00:10:40.166 15:06:18 -- common/autotest_common.sh@1142 -- # return 0 00:10:40.166 15:06:18 -- spdk/autotest.sh@216 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:10:40.166 15:06:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:40.166 15:06:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:40.166 15:06:18 -- common/autotest_common.sh@10 -- # set +x 00:10:40.166 ************************************ 00:10:40.166 START TEST nvme 00:10:40.166 ************************************ 00:10:40.166 15:06:18 nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:10:40.425 * Looking for test storage... 00:10:40.425 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:40.425 15:06:18 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:40.992 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:41.561 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:41.561 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:41.561 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:41.819 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:41.819 15:06:19 nvme -- nvme/nvme.sh@79 -- # uname 00:10:41.819 15:06:19 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:10:41.819 15:06:19 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:10:41.819 15:06:19 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:10:41.819 15:06:19 nvme -- common/autotest_common.sh@1080 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:10:41.819 15:06:19 nvme -- common/autotest_common.sh@1066 -- # _randomize_va_space=2 00:10:41.819 15:06:19 nvme -- common/autotest_common.sh@1067 -- # echo 0 00:10:41.819 15:06:19 nvme -- common/autotest_common.sh@1068 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:10:41.819 15:06:19 nvme -- common/autotest_common.sh@1069 -- # stubpid=70080 00:10:41.819 Waiting for stub to ready for secondary processes... 
00:10:41.819 15:06:19 nvme -- common/autotest_common.sh@1070 -- # echo Waiting for stub to ready for secondary processes... 00:10:41.819 15:06:19 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:41.819 15:06:19 nvme -- common/autotest_common.sh@1073 -- # [[ -e /proc/70080 ]] 00:10:41.819 15:06:19 nvme -- common/autotest_common.sh@1074 -- # sleep 1s 00:10:41.819 [2024-07-15 15:06:19.837495] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:10:41.819 [2024-07-15 15:06:19.837603] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:10:42.756 [2024-07-15 15:06:20.792090] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:42.756 15:06:20 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:42.756 15:06:20 nvme -- common/autotest_common.sh@1073 -- # [[ -e /proc/70080 ]] 00:10:42.756 15:06:20 nvme -- common/autotest_common.sh@1074 -- # sleep 1s 00:10:43.017 [2024-07-15 15:06:21.039430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:43.017 [2024-07-15 15:06:21.039533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:43.017 [2024-07-15 15:06:21.039573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:43.017 [2024-07-15 15:06:21.074195] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:10:43.017 [2024-07-15 15:06:21.074288] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:43.017 [2024-07-15 15:06:21.090825] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:10:43.017 [2024-07-15 15:06:21.091015] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:10:43.017 [2024-07-15 15:06:21.099760] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:43.017 [2024-07-15 15:06:21.100412] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:10:43.017 [2024-07-15 15:06:21.100594] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:10:43.017 [2024-07-15 15:06:21.108501] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:43.017 [2024-07-15 15:06:21.108799] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:10:43.017 [2024-07-15 15:06:21.108909] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:10:43.017 [2024-07-15 15:06:21.115555] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:43.017 [2024-07-15 15:06:21.115829] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:10:43.017 [2024-07-15 15:06:21.115923] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:10:43.017 [2024-07-15 15:06:21.115985] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:10:43.017 [2024-07-15 15:06:21.116057] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:10:43.954 done. 
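The nvme suite runs each test binary as a DPDK secondary process against a long-lived primary. The trace above starts test/app/stub/stub and polls until it publishes /var/run/spdk_stub0; a condensed sketch of that pattern, with the arguments mirroring the trace:

# -s 4096: reserve 4096 MB of hugepage memory; -i 0: shared memory id 0;
# -m 0xE: run the stub's reactors on cores 1-3.
./test/app/stub/stub -s 4096 -i 0 -m 0xE &
stubpid=$!
while [ ! -e /var/run/spdk_stub0 ]; do
    [ -e /proc/$stubpid ] || { echo "stub exited before becoming ready" >&2; exit 1; }
    sleep 1
done
# ... run the nvme tests here; they attach to the stub as secondary processes ...
kill "$stubpid"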
00:10:43.954 15:06:21 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:43.954 15:06:21 nvme -- common/autotest_common.sh@1076 -- # echo done. 00:10:43.954 15:06:21 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:43.954 15:06:21 nvme -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:10:43.954 15:06:21 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:43.954 15:06:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:43.954 ************************************ 00:10:43.954 START TEST nvme_reset 00:10:43.954 ************************************ 00:10:43.954 15:06:21 nvme.nvme_reset -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:43.954 Initializing NVMe Controllers 00:10:43.954 Skipping QEMU NVMe SSD at 0000:00:10.0 00:10:43.954 Skipping QEMU NVMe SSD at 0000:00:11.0 00:10:43.954 Skipping QEMU NVMe SSD at 0000:00:13.0 00:10:43.954 Skipping QEMU NVMe SSD at 0000:00:12.0 00:10:43.954 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:10:44.212 00:10:44.212 real 0m0.243s 00:10:44.212 user 0m0.076s 00:10:44.212 sys 0m0.127s 00:10:44.212 15:06:22 nvme.nvme_reset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:44.212 15:06:22 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:10:44.212 ************************************ 00:10:44.212 END TEST nvme_reset 00:10:44.212 ************************************ 00:10:44.212 15:06:22 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:44.212 15:06:22 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:10:44.212 15:06:22 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:44.212 15:06:22 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:44.212 15:06:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:44.212 ************************************ 00:10:44.212 START TEST nvme_identify 00:10:44.212 ************************************ 00:10:44.212 15:06:22 nvme.nvme_identify -- common/autotest_common.sh@1123 -- # nvme_identify 00:10:44.212 15:06:22 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:10:44.212 15:06:22 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:10:44.212 15:06:22 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:10:44.212 15:06:22 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:10:44.212 15:06:22 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # bdfs=() 00:10:44.212 15:06:22 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # local bdfs 00:10:44.212 15:06:22 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:44.212 15:06:22 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:44.212 15:06:22 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:10:44.212 15:06:22 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:10:44.212 15:06:22 nvme.nvme_identify -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:44.212 15:06:22 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:10:44.472 
===================================================== 00:10:44.472 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:44.472 ===================================================== 00:10:44.472 Controller Capabilities/Features 00:10:44.472 ================================ 00:10:44.472 Vendor ID: 1b36 00:10:44.472 Subsystem Vendor ID: 1af4 00:10:44.472 Serial Number: 12340 00:10:44.472 Model Number: QEMU NVMe Ctrl 00:10:44.472 Firmware Version: 8.0.0 00:10:44.472 Recommended Arb Burst: 6 00:10:44.472 IEEE OUI Identifier: 00 54 52 00:10:44.472 Multi-path I/O 00:10:44.472 May have multiple subsystem ports: No 00:10:44.472 May have multiple controllers: No 00:10:44.472 Associated with SR-IOV VF: No 00:10:44.472 Max Data Transfer Size: 524288 00:10:44.472 Max Number of Namespaces: 256 00:10:44.472 Max Number of I/O Queues: 64 00:10:44.472 NVMe Specification Version (VS): 1.4 00:10:44.472 NVMe Specification Version (Identify): 1.4 00:10:44.472 Maximum Queue Entries: 2048 00:10:44.472 Contiguous Queues Required: Yes 00:10:44.472 Arbitration Mechanisms Supported 00:10:44.472 Weighted Round Robin: Not Supported 00:10:44.472 Vendor Specific: Not Supported 00:10:44.472 Reset Timeout: 7500 ms 00:10:44.472 Doorbell Stride: 4 bytes 00:10:44.472 NVM Subsystem Reset: Not Supported 00:10:44.472 Command Sets Supported 00:10:44.472 NVM Command Set: Supported 00:10:44.472 Boot Partition: Not Supported 00:10:44.472 Memory Page Size Minimum: 4096 bytes 00:10:44.472 Memory Page Size Maximum: 65536 bytes 00:10:44.472 Persistent Memory Region: Not Supported 00:10:44.472 Optional Asynchronous Events Supported 00:10:44.472 Namespace Attribute Notices: Supported 00:10:44.472 Firmware Activation Notices: Not Supported 00:10:44.472 ANA Change Notices: Not Supported 00:10:44.472 PLE Aggregate Log Change Notices: Not Supported 00:10:44.472 LBA Status Info Alert Notices: Not Supported 00:10:44.472 EGE Aggregate Log Change Notices: Not Supported 00:10:44.472 Normal NVM Subsystem Shutdown event: Not Supported 00:10:44.472 Zone Descriptor Change Notices: Not Supported 00:10:44.472 Discovery Log Change Notices: Not Supported 00:10:44.472 Controller Attributes 00:10:44.472 128-bit Host Identifier: Not Supported 00:10:44.472 Non-Operational Permissive Mode: Not Supported 00:10:44.472 NVM Sets: Not Supported 00:10:44.472 Read Recovery Levels: Not Supported 00:10:44.472 Endurance Groups: Not Supported 00:10:44.472 Predictable Latency Mode: Not Supported 00:10:44.472 Traffic Based Keep ALive: Not Supported 00:10:44.472 Namespace Granularity: Not Supported 00:10:44.472 SQ Associations: Not Supported 00:10:44.472 UUID List: Not Supported 00:10:44.472 Multi-Domain Subsystem: Not Supported 00:10:44.472 Fixed Capacity Management: Not Supported 00:10:44.472 Variable Capacity Management: Not Supported 00:10:44.472 Delete Endurance Group: Not Supported 00:10:44.472 Delete NVM Set: Not Supported 00:10:44.472 Extended LBA Formats Supported: Supported 00:10:44.472 Flexible Data Placement Supported: Not Supported 00:10:44.472 00:10:44.472 Controller Memory Buffer Support 00:10:44.472 ================================ 00:10:44.472 Supported: No 00:10:44.472 00:10:44.472 Persistent Memory Region Support 00:10:44.472 ================================ 00:10:44.472 Supported: No 00:10:44.472 00:10:44.472 Admin Command Set Attributes 00:10:44.472 ============================ 00:10:44.472 Security Send/Receive: Not Supported 00:10:44.472 Format NVM: Supported 00:10:44.472 Firmware Activate/Download: Not Supported 00:10:44.472 Namespace Management: 
Supported 00:10:44.472 Device Self-Test: Not Supported 00:10:44.472 Directives: Supported 00:10:44.473 NVMe-MI: Not Supported 00:10:44.473 Virtualization Management: Not Supported 00:10:44.473 Doorbell Buffer Config: Supported 00:10:44.473 Get LBA Status Capability: Not Supported 00:10:44.473 Command & Feature Lockdown Capability: Not Supported 00:10:44.473 Abort Command Limit: 4 00:10:44.473 Async Event Request Limit: 4 00:10:44.473 Number of Firmware Slots: N/A 00:10:44.473 Firmware Slot 1 Read-Only: N/A 00:10:44.473 Firmware Activation Without Reset: N/A 00:10:44.473 Multiple Update Detection Support: N/A 00:10:44.473 Firmware Update Granularity: No Information Provided 00:10:44.473 Per-Namespace SMART Log: Yes 00:10:44.473 Asymmetric Namespace Access Log Page: Not Supported 00:10:44.473 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:44.473 Command Effects Log Page: Supported 00:10:44.473 Get Log Page Extended Data: Supported 00:10:44.473 Telemetry Log Pages: Not Supported 00:10:44.473 Persistent Event Log Pages: Not Supported 00:10:44.473 Supported Log Pages Log Page: May Support 00:10:44.473 Commands Supported & Effects Log Page: Not Supported 00:10:44.473 Feature Identifiers & Effects Log Page:May Support 00:10:44.473 NVMe-MI Commands & Effects Log Page: May Support 00:10:44.473 Data Area 4 for Telemetry Log: Not Supported 00:10:44.473 Error Log Page Entries Supported: 1 00:10:44.473 Keep Alive: Not Supported 00:10:44.473 00:10:44.473 NVM Command Set Attributes 00:10:44.473 ========================== 00:10:44.473 Submission Queue Entry Size 00:10:44.473 Max: 64 00:10:44.473 Min: 64 00:10:44.473 Completion Queue Entry Size 00:10:44.473 Max: 16 00:10:44.473 Min: 16 00:10:44.473 Number of Namespaces: 256 00:10:44.473 Compare Command: Supported 00:10:44.473 Write Uncorrectable Command: Not Supported 00:10:44.473 Dataset Management Command: Supported 00:10:44.473 Write Zeroes Command: Supported 00:10:44.473 Set Features Save Field: Supported 00:10:44.473 Reservations: Not Supported 00:10:44.473 Timestamp: Supported 00:10:44.473 Copy: Supported 00:10:44.473 Volatile Write Cache: Present 00:10:44.473 Atomic Write Unit (Normal): 1 00:10:44.473 Atomic Write Unit (PFail): 1 00:10:44.473 Atomic Compare & Write Unit: 1 00:10:44.473 Fused Compare & Write: Not Supported 00:10:44.473 Scatter-Gather List 00:10:44.473 SGL Command Set: Supported 00:10:44.473 SGL Keyed: Not Supported 00:10:44.473 SGL Bit Bucket Descriptor: Not Supported 00:10:44.473 SGL Metadata Pointer: Not Supported 00:10:44.473 Oversized SGL: Not Supported 00:10:44.473 SGL Metadata Address: Not Supported 00:10:44.473 SGL Offset: Not Supported 00:10:44.473 Transport SGL Data Block: Not Supported 00:10:44.473 Replay Protected Memory Block: Not Supported 00:10:44.473 00:10:44.473 Firmware Slot Information 00:10:44.473 ========================= 00:10:44.473 Active slot: 1 00:10:44.473 Slot 1 Firmware Revision: 1.0 00:10:44.473 00:10:44.473 00:10:44.473 Commands Supported and Effects 00:10:44.473 ============================== 00:10:44.473 Admin Commands 00:10:44.473 -------------- 00:10:44.473 Delete I/O Submission Queue (00h): Supported 00:10:44.473 Create I/O Submission Queue (01h): Supported 00:10:44.473 Get Log Page (02h): Supported 00:10:44.473 Delete I/O Completion Queue (04h): Supported 00:10:44.473 Create I/O Completion Queue (05h): Supported 00:10:44.473 Identify (06h): Supported 00:10:44.473 Abort (08h): Supported 00:10:44.473 Set Features (09h): Supported 00:10:44.473 Get Features (0Ah): Supported 00:10:44.473 Asynchronous 
Event Request (0Ch): Supported 00:10:44.473 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:44.473 Directive Send (19h): Supported 00:10:44.473 Directive Receive (1Ah): Supported 00:10:44.473 Virtualization Management (1Ch): Supported 00:10:44.473 Doorbell Buffer Config (7Ch): Supported 00:10:44.473 Format NVM (80h): Supported LBA-Change 00:10:44.473 I/O Commands 00:10:44.473 ------------ 00:10:44.473 Flush (00h): Supported LBA-Change 00:10:44.473 Write (01h): Supported LBA-Change 00:10:44.473 Read (02h): Supported 00:10:44.473 Compare (05h): Supported 00:10:44.473 Write Zeroes (08h): Supported LBA-Change 00:10:44.473 Dataset Management (09h): Supported LBA-Change 00:10:44.473 Unknown (0Ch): Supported 00:10:44.473 Unknown (12h): Supported 00:10:44.473 Copy (19h): Supported LBA-Change 00:10:44.473 Unknown (1Dh): Supported LBA-Change 00:10:44.473 00:10:44.473 Error Log 00:10:44.473 ========= 00:10:44.473 00:10:44.473 Arbitration 00:10:44.473 =========== 00:10:44.473 Arbitration Burst: no limit 00:10:44.473 00:10:44.473 Power Management 00:10:44.473 ================ 00:10:44.473 Number of Power States: 1 00:10:44.473 Current Power State: Power State #0 00:10:44.473 Power State #0: 00:10:44.473 Max Power: 25.00 W 00:10:44.473 Non-Operational State: Operational 00:10:44.473 Entry Latency: 16 microseconds 00:10:44.473 Exit Latency: 4 microseconds 00:10:44.473 Relative Read Throughput: 0 00:10:44.473 Relative Read Latency: 0 00:10:44.473 Relative Write Throughput: 0 00:10:44.473 Relative Write Latency: 0 00:10:44.473 Idle Power[2024-07-15 15:06:22.446056] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 70116 terminated unexpected 00:10:44.473 [2024-07-15 15:06:22.446938] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 70116 terminated unexpected 00:10:44.473 : Not Reported 00:10:44.473 Active Power: Not Reported 00:10:44.473 Non-Operational Permissive Mode: Not Supported 00:10:44.473 00:10:44.473 Health Information 00:10:44.473 ================== 00:10:44.473 Critical Warnings: 00:10:44.473 Available Spare Space: OK 00:10:44.473 Temperature: OK 00:10:44.473 Device Reliability: OK 00:10:44.473 Read Only: No 00:10:44.473 Volatile Memory Backup: OK 00:10:44.473 Current Temperature: 323 Kelvin (50 Celsius) 00:10:44.473 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:44.473 Available Spare: 0% 00:10:44.473 Available Spare Threshold: 0% 00:10:44.473 Life Percentage Used: 0% 00:10:44.473 Data Units Read: 773 00:10:44.473 Data Units Written: 664 00:10:44.473 Host Read Commands: 34297 00:10:44.473 Host Write Commands: 33335 00:10:44.473 Controller Busy Time: 0 minutes 00:10:44.473 Power Cycles: 0 00:10:44.473 Power On Hours: 0 hours 00:10:44.473 Unsafe Shutdowns: 0 00:10:44.473 Unrecoverable Media Errors: 0 00:10:44.473 Lifetime Error Log Entries: 0 00:10:44.473 Warning Temperature Time: 0 minutes 00:10:44.473 Critical Temperature Time: 0 minutes 00:10:44.473 00:10:44.473 Number of Queues 00:10:44.473 ================ 00:10:44.473 Number of I/O Submission Queues: 64 00:10:44.473 Number of I/O Completion Queues: 64 00:10:44.473 00:10:44.473 ZNS Specific Controller Data 00:10:44.473 ============================ 00:10:44.473 Zone Append Size Limit: 0 00:10:44.473 00:10:44.473 00:10:44.473 Active Namespaces 00:10:44.473 ================= 00:10:44.473 Namespace ID:1 00:10:44.473 Error Recovery Timeout: Unlimited 00:10:44.473 Command Set Identifier: NVM (00h) 00:10:44.473 Deallocate: Supported 00:10:44.473 
Deallocated/Unwritten Error: Supported 00:10:44.473 Deallocated Read Value: All 0x00 00:10:44.473 Deallocate in Write Zeroes: Not Supported 00:10:44.473 Deallocated Guard Field: 0xFFFF 00:10:44.473 Flush: Supported 00:10:44.473 Reservation: Not Supported 00:10:44.473 Metadata Transferred as: Separate Metadata Buffer 00:10:44.473 Namespace Sharing Capabilities: Private 00:10:44.473 Size (in LBAs): 1548666 (5GiB) 00:10:44.473 Capacity (in LBAs): 1548666 (5GiB) 00:10:44.473 Utilization (in LBAs): 1548666 (5GiB) 00:10:44.473 Thin Provisioning: Not Supported 00:10:44.473 Per-NS Atomic Units: No 00:10:44.473 Maximum Single Source Range Length: 128 00:10:44.473 Maximum Copy Length: 128 00:10:44.473 Maximum Source Range Count: 128 00:10:44.473 NGUID/EUI64 Never Reused: No 00:10:44.473 Namespace Write Protected: No 00:10:44.473 Number of LBA Formats: 8 00:10:44.473 Current LBA Format: LBA Format #07 00:10:44.473 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:44.473 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:44.473 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:44.473 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:44.473 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:44.473 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:44.473 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:44.473 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:44.473 00:10:44.473 NVM Specific Namespace Data 00:10:44.473 =========================== 00:10:44.473 Logical Block Storage Tag Mask: 0 00:10:44.473 Protection Information Capabilities: 00:10:44.473 16b Guard Protection Information Storage Tag Support: No 00:10:44.473 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:44.473 Storage Tag Check Read Support: No 00:10:44.473 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.473 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.473 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.473 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.473 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.473 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.473 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.473 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.473 ===================================================== 00:10:44.473 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:44.473 ===================================================== 00:10:44.473 Controller Capabilities/Features 00:10:44.473 ================================ 00:10:44.473 Vendor ID: 1b36 00:10:44.473 Subsystem Vendor ID: 1af4 00:10:44.473 Serial Number: 12341 00:10:44.473 Model Number: QEMU NVMe Ctrl 00:10:44.473 Firmware Version: 8.0.0 00:10:44.473 Recommended Arb Burst: 6 00:10:44.473 IEEE OUI Identifier: 00 54 52 00:10:44.473 Multi-path I/O 00:10:44.473 May have multiple subsystem ports: No 00:10:44.473 May have multiple controllers: No 00:10:44.473 Associated with SR-IOV VF: No 00:10:44.473 Max Data Transfer Size: 524288 00:10:44.473 Max Number of Namespaces: 256 00:10:44.473 Max Number of I/O Queues: 64 00:10:44.473 NVMe Specification Version (VS): 1.4 00:10:44.473 NVMe 
Specification Version (Identify): 1.4 00:10:44.473 Maximum Queue Entries: 2048 00:10:44.473 Contiguous Queues Required: Yes 00:10:44.473 Arbitration Mechanisms Supported 00:10:44.473 Weighted Round Robin: Not Supported 00:10:44.473 Vendor Specific: Not Supported 00:10:44.473 Reset Timeout: 7500 ms 00:10:44.473 Doorbell Stride: 4 bytes 00:10:44.473 NVM Subsystem Reset: Not Supported 00:10:44.473 Command Sets Supported 00:10:44.473 NVM Command Set: Supported 00:10:44.473 Boot Partition: Not Supported 00:10:44.473 Memory Page Size Minimum: 4096 bytes 00:10:44.473 Memory Page Size Maximum: 65536 bytes 00:10:44.473 Persistent Memory Region: Not Supported 00:10:44.473 Optional Asynchronous Events Supported 00:10:44.473 Namespace Attribute Notices: Supported 00:10:44.473 Firmware Activation Notices: Not Supported 00:10:44.473 ANA Change Notices: Not Supported 00:10:44.473 PLE Aggregate Log Change Notices: Not Supported 00:10:44.473 LBA Status Info Alert Notices: Not Supported 00:10:44.473 EGE Aggregate Log Change Notices: Not Supported 00:10:44.473 Normal NVM Subsystem Shutdown event: Not Supported 00:10:44.473 Zone Descriptor Change Notices: Not Supported 00:10:44.473 Discovery Log Change Notices: Not Supported 00:10:44.473 Controller Attributes 00:10:44.473 128-bit Host Identifier: Not Supported 00:10:44.473 Non-Operational Permissive Mode: Not Supported 00:10:44.473 NVM Sets: Not Supported 00:10:44.473 Read Recovery Levels: Not Supported 00:10:44.473 Endurance Groups: Not Supported 00:10:44.473 Predictable Latency Mode: Not Supported 00:10:44.473 Traffic Based Keep ALive: Not Supported 00:10:44.473 Namespace Granularity: Not Supported 00:10:44.473 SQ Associations: Not Supported 00:10:44.473 UUID List: Not Supported 00:10:44.473 Multi-Domain Subsystem: Not Supported 00:10:44.473 Fixed Capacity Management: Not Supported 00:10:44.473 Variable Capacity Management: Not Supported 00:10:44.473 Delete Endurance Group: Not Supported 00:10:44.473 Delete NVM Set: Not Supported 00:10:44.473 Extended LBA Formats Supported: Supported 00:10:44.473 Flexible Data Placement Supported: Not Supported 00:10:44.473 00:10:44.473 Controller Memory Buffer Support 00:10:44.473 ================================ 00:10:44.473 Supported: No 00:10:44.473 00:10:44.473 Persistent Memory Region Support 00:10:44.473 ================================ 00:10:44.473 Supported: No 00:10:44.473 00:10:44.473 Admin Command Set Attributes 00:10:44.473 ============================ 00:10:44.473 Security Send/Receive: Not Supported 00:10:44.473 Format NVM: Supported 00:10:44.473 Firmware Activate/Download: Not Supported 00:10:44.473 Namespace Management: Supported 00:10:44.473 Device Self-Test: Not Supported 00:10:44.473 Directives: Supported 00:10:44.473 NVMe-MI: Not Supported 00:10:44.473 Virtualization Management: Not Supported 00:10:44.473 Doorbell Buffer Config: Supported 00:10:44.473 Get LBA Status Capability: Not Supported 00:10:44.473 Command & Feature Lockdown Capability: Not Supported 00:10:44.473 Abort Command Limit: 4 00:10:44.473 Async Event Request Limit: 4 00:10:44.473 Number of Firmware Slots: N/A 00:10:44.473 Firmware Slot 1 Read-Only: N/A 00:10:44.473 Firmware Activation Without Reset: N/A 00:10:44.473 Multiple Update Detection Support: N/A 00:10:44.473 Firmware Update Granularity: No Information Provided 00:10:44.473 Per-Namespace SMART Log: Yes 00:10:44.473 Asymmetric Namespace Access Log Page: Not Supported 00:10:44.473 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:44.473 Command Effects Log Page: Supported 
00:10:44.473 Get Log Page Extended Data: Supported 00:10:44.473 Telemetry Log Pages: Not Supported 00:10:44.473 Persistent Event Log Pages: Not Supported 00:10:44.473 Supported Log Pages Log Page: May Support 00:10:44.474 Commands Supported & Effects Log Page: Not Supported 00:10:44.474 Feature Identifiers & Effects Log Page:May Support 00:10:44.474 NVMe-MI Commands & Effects Log Page: May Support 00:10:44.474 Data Area 4 for Telemetry Log: Not Supported 00:10:44.474 Error Log Page Entries Supported: 1 00:10:44.474 Keep Alive: Not Supported 00:10:44.474 00:10:44.474 NVM Command Set Attributes 00:10:44.474 ========================== 00:10:44.474 Submission Queue Entry Size 00:10:44.474 Max: 64 00:10:44.474 Min: 64 00:10:44.474 Completion Queue Entry Size 00:10:44.474 Max: 16 00:10:44.474 Min: 16 00:10:44.474 Number of Namespaces: 256 00:10:44.474 Compare Command: Supported 00:10:44.474 Write Uncorrectable Command: Not Supported 00:10:44.474 Dataset Management Command: Supported 00:10:44.474 Write Zeroes Command: Supported 00:10:44.474 Set Features Save Field: Supported 00:10:44.474 Reservations: Not Supported 00:10:44.474 Timestamp: Supported 00:10:44.474 Copy: Supported 00:10:44.474 Volatile Write Cache: Present 00:10:44.474 Atomic Write Unit (Normal): 1 00:10:44.474 Atomic Write Unit (PFail): 1 00:10:44.474 Atomic Compare & Write Unit: 1 00:10:44.474 Fused Compare & Write: Not Supported 00:10:44.474 Scatter-Gather List 00:10:44.474 SGL Command Set: Supported 00:10:44.474 SGL Keyed: Not Supported 00:10:44.474 SGL Bit Bucket Descriptor: Not Supported 00:10:44.474 SGL Metadata Pointer: Not Supported 00:10:44.474 Oversized SGL: Not Supported 00:10:44.474 SGL Metadata Address: Not Supported 00:10:44.474 SGL Offset: Not Supported 00:10:44.474 Transport SGL Data Block: Not Supported 00:10:44.474 Replay Protected Memory Block: Not Supported 00:10:44.474 00:10:44.474 Firmware Slot Information 00:10:44.474 ========================= 00:10:44.474 Active slot: 1 00:10:44.474 Slot 1 Firmware Revision: 1.0 00:10:44.474 00:10:44.474 00:10:44.474 Commands Supported and Effects 00:10:44.474 ============================== 00:10:44.474 Admin Commands 00:10:44.474 -------------- 00:10:44.474 Delete I/O Submission Queue (00h): Supported 00:10:44.474 Create I/O Submission Queue (01h): Supported 00:10:44.474 Get Log Page (02h): Supported 00:10:44.474 Delete I/O Completion Queue (04h): Supported 00:10:44.474 Create I/O Completion Queue (05h): Supported 00:10:44.474 Identify (06h): Supported 00:10:44.474 Abort (08h): Supported 00:10:44.474 Set Features (09h): Supported 00:10:44.474 Get Features (0Ah): Supported 00:10:44.474 Asynchronous Event Request (0Ch): Supported 00:10:44.474 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:44.474 Directive Send (19h): Supported 00:10:44.474 Directive Receive (1Ah): Supported 00:10:44.474 Virtualization Management (1Ch): Supported 00:10:44.474 Doorbell Buffer Config (7Ch): Supported 00:10:44.474 Format NVM (80h): Supported LBA-Change 00:10:44.474 I/O Commands 00:10:44.474 ------------ 00:10:44.474 Flush (00h): Supported LBA-Change 00:10:44.474 Write (01h): Supported LBA-Change 00:10:44.474 Read (02h): Supported 00:10:44.474 Compare (05h): Supported 00:10:44.474 Write Zeroes (08h): Supported LBA-Change 00:10:44.474 Dataset Management (09h): Supported LBA-Change 00:10:44.474 Unknown (0Ch): Supported 00:10:44.474 Unknown (12h): Supported 00:10:44.474 Copy (19h): Supported LBA-Change 00:10:44.474 Unknown (1Dh): Supported LBA-Change 00:10:44.474 00:10:44.474 Error 
Log 00:10:44.474 ========= 00:10:44.474 00:10:44.474 Arbitration 00:10:44.474 =========== 00:10:44.474 Arbitration Burst: no limit 00:10:44.474 00:10:44.474 Power Management 00:10:44.474 ================ 00:10:44.474 Number of Power States: 1 00:10:44.474 Current Power State: Power State #0 00:10:44.474 Power State #0: 00:10:44.474 Max Power: 25.00 W 00:10:44.474 Non-Operational State: Operational 00:10:44.474 Entry Latency: 16 microseconds 00:10:44.474 Exit Latency: 4 microseconds 00:10:44.474 Relative Read Throughput: 0 00:10:44.474 Relative Read Latency: 0 00:10:44.474 Relative Write Throughput: 0 00:10:44.474 Relative Write Latency: 0 00:10:44.474 Idle Power: Not Reported 00:10:44.474 Active Power: Not Reported 00:10:44.474 Non-Operational Permissive Mode: Not Supported 00:10:44.474 00:10:44.474 Health Information 00:10:44.474 ================== 00:10:44.474 Critical Warnings: 00:10:44.474 Available Spare Space: OK 00:10:44.474 Temperature: [2024-07-15 15:06:22.448114] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 70116 terminated unexpected 00:10:44.474 OK 00:10:44.474 Device Reliability: OK 00:10:44.474 Read Only: No 00:10:44.474 Volatile Memory Backup: OK 00:10:44.474 Current Temperature: 323 Kelvin (50 Celsius) 00:10:44.474 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:44.474 Available Spare: 0% 00:10:44.474 Available Spare Threshold: 0% 00:10:44.474 Life Percentage Used: 0% 00:10:44.474 Data Units Read: 1196 00:10:44.474 Data Units Written: 983 00:10:44.474 Host Read Commands: 50071 00:10:44.474 Host Write Commands: 47168 00:10:44.474 Controller Busy Time: 0 minutes 00:10:44.474 Power Cycles: 0 00:10:44.474 Power On Hours: 0 hours 00:10:44.474 Unsafe Shutdowns: 0 00:10:44.474 Unrecoverable Media Errors: 0 00:10:44.474 Lifetime Error Log Entries: 0 00:10:44.474 Warning Temperature Time: 0 minutes 00:10:44.474 Critical Temperature Time: 0 minutes 00:10:44.474 00:10:44.474 Number of Queues 00:10:44.474 ================ 00:10:44.474 Number of I/O Submission Queues: 64 00:10:44.474 Number of I/O Completion Queues: 64 00:10:44.474 00:10:44.474 ZNS Specific Controller Data 00:10:44.474 ============================ 00:10:44.474 Zone Append Size Limit: 0 00:10:44.474 00:10:44.474 00:10:44.474 Active Namespaces 00:10:44.474 ================= 00:10:44.474 Namespace ID:1 00:10:44.474 Error Recovery Timeout: Unlimited 00:10:44.474 Command Set Identifier: NVM (00h) 00:10:44.474 Deallocate: Supported 00:10:44.474 Deallocated/Unwritten Error: Supported 00:10:44.474 Deallocated Read Value: All 0x00 00:10:44.474 Deallocate in Write Zeroes: Not Supported 00:10:44.474 Deallocated Guard Field: 0xFFFF 00:10:44.474 Flush: Supported 00:10:44.474 Reservation: Not Supported 00:10:44.474 Namespace Sharing Capabilities: Private 00:10:44.474 Size (in LBAs): 1310720 (5GiB) 00:10:44.474 Capacity (in LBAs): 1310720 (5GiB) 00:10:44.474 Utilization (in LBAs): 1310720 (5GiB) 00:10:44.474 Thin Provisioning: Not Supported 00:10:44.474 Per-NS Atomic Units: No 00:10:44.474 Maximum Single Source Range Length: 128 00:10:44.474 Maximum Copy Length: 128 00:10:44.474 Maximum Source Range Count: 128 00:10:44.474 NGUID/EUI64 Never Reused: No 00:10:44.474 Namespace Write Protected: No 00:10:44.474 Number of LBA Formats: 8 00:10:44.474 Current LBA Format: LBA Format #04 00:10:44.474 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:44.474 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:44.474 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:44.474 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:10:44.474 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:44.474 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:44.474 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:44.474 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:44.474 00:10:44.474 NVM Specific Namespace Data 00:10:44.474 =========================== 00:10:44.474 Logical Block Storage Tag Mask: 0 00:10:44.474 Protection Information Capabilities: 00:10:44.474 16b Guard Protection Information Storage Tag Support: No 00:10:44.474 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:44.474 Storage Tag Check Read Support: No 00:10:44.474 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.474 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.474 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.474 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.474 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.474 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.474 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.474 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.474 ===================================================== 00:10:44.474 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:44.474 ===================================================== 00:10:44.474 Controller Capabilities/Features 00:10:44.474 ================================ 00:10:44.474 Vendor ID: 1b36 00:10:44.474 Subsystem Vendor ID: 1af4 00:10:44.474 Serial Number: 12343 00:10:44.474 Model Number: QEMU NVMe Ctrl 00:10:44.474 Firmware Version: 8.0.0 00:10:44.474 Recommended Arb Burst: 6 00:10:44.474 IEEE OUI Identifier: 00 54 52 00:10:44.474 Multi-path I/O 00:10:44.474 May have multiple subsystem ports: No 00:10:44.474 May have multiple controllers: Yes 00:10:44.474 Associated with SR-IOV VF: No 00:10:44.474 Max Data Transfer Size: 524288 00:10:44.474 Max Number of Namespaces: 256 00:10:44.474 Max Number of I/O Queues: 64 00:10:44.474 NVMe Specification Version (VS): 1.4 00:10:44.474 NVMe Specification Version (Identify): 1.4 00:10:44.474 Maximum Queue Entries: 2048 00:10:44.474 Contiguous Queues Required: Yes 00:10:44.474 Arbitration Mechanisms Supported 00:10:44.474 Weighted Round Robin: Not Supported 00:10:44.474 Vendor Specific: Not Supported 00:10:44.474 Reset Timeout: 7500 ms 00:10:44.474 Doorbell Stride: 4 bytes 00:10:44.474 NVM Subsystem Reset: Not Supported 00:10:44.474 Command Sets Supported 00:10:44.474 NVM Command Set: Supported 00:10:44.474 Boot Partition: Not Supported 00:10:44.474 Memory Page Size Minimum: 4096 bytes 00:10:44.474 Memory Page Size Maximum: 65536 bytes 00:10:44.474 Persistent Memory Region: Not Supported 00:10:44.474 Optional Asynchronous Events Supported 00:10:44.474 Namespace Attribute Notices: Supported 00:10:44.474 Firmware Activation Notices: Not Supported 00:10:44.474 ANA Change Notices: Not Supported 00:10:44.474 PLE Aggregate Log Change Notices: Not Supported 00:10:44.474 LBA Status Info Alert Notices: Not Supported 00:10:44.474 EGE Aggregate Log Change Notices: Not Supported 00:10:44.474 Normal NVM Subsystem Shutdown event: Not Supported 00:10:44.474 Zone 
Descriptor Change Notices: Not Supported 00:10:44.474 Discovery Log Change Notices: Not Supported 00:10:44.474 Controller Attributes 00:10:44.474 128-bit Host Identifier: Not Supported 00:10:44.474 Non-Operational Permissive Mode: Not Supported 00:10:44.474 NVM Sets: Not Supported 00:10:44.474 Read Recovery Levels: Not Supported 00:10:44.474 Endurance Groups: Supported 00:10:44.474 Predictable Latency Mode: Not Supported 00:10:44.474 Traffic Based Keep ALive: Not Supported 00:10:44.474 Namespace Granularity: Not Supported 00:10:44.474 SQ Associations: Not Supported 00:10:44.474 UUID List: Not Supported 00:10:44.474 Multi-Domain Subsystem: Not Supported 00:10:44.474 Fixed Capacity Management: Not Supported 00:10:44.474 Variable Capacity Management: Not Supported 00:10:44.474 Delete Endurance Group: Not Supported 00:10:44.474 Delete NVM Set: Not Supported 00:10:44.474 Extended LBA Formats Supported: Supported 00:10:44.474 Flexible Data Placement Supported: Supported 00:10:44.474 00:10:44.474 Controller Memory Buffer Support 00:10:44.474 ================================ 00:10:44.474 Supported: No 00:10:44.474 00:10:44.474 Persistent Memory Region Support 00:10:44.474 ================================ 00:10:44.474 Supported: No 00:10:44.474 00:10:44.474 Admin Command Set Attributes 00:10:44.474 ============================ 00:10:44.474 Security Send/Receive: Not Supported 00:10:44.474 Format NVM: Supported 00:10:44.474 Firmware Activate/Download: Not Supported 00:10:44.474 Namespace Management: Supported 00:10:44.474 Device Self-Test: Not Supported 00:10:44.474 Directives: Supported 00:10:44.474 NVMe-MI: Not Supported 00:10:44.474 Virtualization Management: Not Supported 00:10:44.474 Doorbell Buffer Config: Supported 00:10:44.474 Get LBA Status Capability: Not Supported 00:10:44.474 Command & Feature Lockdown Capability: Not Supported 00:10:44.474 Abort Command Limit: 4 00:10:44.474 Async Event Request Limit: 4 00:10:44.474 Number of Firmware Slots: N/A 00:10:44.474 Firmware Slot 1 Read-Only: N/A 00:10:44.474 Firmware Activation Without Reset: N/A 00:10:44.474 Multiple Update Detection Support: N/A 00:10:44.474 Firmware Update Granularity: No Information Provided 00:10:44.474 Per-Namespace SMART Log: Yes 00:10:44.474 Asymmetric Namespace Access Log Page: Not Supported 00:10:44.474 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:44.474 Command Effects Log Page: Supported 00:10:44.474 Get Log Page Extended Data: Supported 00:10:44.474 Telemetry Log Pages: Not Supported 00:10:44.474 Persistent Event Log Pages: Not Supported 00:10:44.474 Supported Log Pages Log Page: May Support 00:10:44.474 Commands Supported & Effects Log Page: Not Supported 00:10:44.474 Feature Identifiers & Effects Log Page:May Support 00:10:44.474 NVMe-MI Commands & Effects Log Page: May Support 00:10:44.474 Data Area 4 for Telemetry Log: Not Supported 00:10:44.474 Error Log Page Entries Supported: 1 00:10:44.474 Keep Alive: Not Supported 00:10:44.474 00:10:44.474 NVM Command Set Attributes 00:10:44.474 ========================== 00:10:44.474 Submission Queue Entry Size 00:10:44.474 Max: 64 00:10:44.474 Min: 64 00:10:44.474 Completion Queue Entry Size 00:10:44.474 Max: 16 00:10:44.474 Min: 16 00:10:44.474 Number of Namespaces: 256 00:10:44.474 Compare Command: Supported 00:10:44.474 Write Uncorrectable Command: Not Supported 00:10:44.474 Dataset Management Command: Supported 00:10:44.474 Write Zeroes Command: Supported 00:10:44.474 Set Features Save Field: Supported 00:10:44.474 Reservations: Not Supported 00:10:44.474 
Timestamp: Supported 00:10:44.475 Copy: Supported 00:10:44.475 Volatile Write Cache: Present 00:10:44.475 Atomic Write Unit (Normal): 1 00:10:44.475 Atomic Write Unit (PFail): 1 00:10:44.475 Atomic Compare & Write Unit: 1 00:10:44.475 Fused Compare & Write: Not Supported 00:10:44.475 Scatter-Gather List 00:10:44.475 SGL Command Set: Supported 00:10:44.475 SGL Keyed: Not Supported 00:10:44.475 SGL Bit Bucket Descriptor: Not Supported 00:10:44.475 SGL Metadata Pointer: Not Supported 00:10:44.475 Oversized SGL: Not Supported 00:10:44.475 SGL Metadata Address: Not Supported 00:10:44.475 SGL Offset: Not Supported 00:10:44.475 Transport SGL Data Block: Not Supported 00:10:44.475 Replay Protected Memory Block: Not Supported 00:10:44.475 00:10:44.475 Firmware Slot Information 00:10:44.475 ========================= 00:10:44.475 Active slot: 1 00:10:44.475 Slot 1 Firmware Revision: 1.0 00:10:44.475 00:10:44.475 00:10:44.475 Commands Supported and Effects 00:10:44.475 ============================== 00:10:44.475 Admin Commands 00:10:44.475 -------------- 00:10:44.475 Delete I/O Submission Queue (00h): Supported 00:10:44.475 Create I/O Submission Queue (01h): Supported 00:10:44.475 Get Log Page (02h): Supported 00:10:44.475 Delete I/O Completion Queue (04h): Supported 00:10:44.475 Create I/O Completion Queue (05h): Supported 00:10:44.475 Identify (06h): Supported 00:10:44.475 Abort (08h): Supported 00:10:44.475 Set Features (09h): Supported 00:10:44.475 Get Features (0Ah): Supported 00:10:44.475 Asynchronous Event Request (0Ch): Supported 00:10:44.475 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:44.475 Directive Send (19h): Supported 00:10:44.475 Directive Receive (1Ah): Supported 00:10:44.475 Virtualization Management (1Ch): Supported 00:10:44.475 Doorbell Buffer Config (7Ch): Supported 00:10:44.475 Format NVM (80h): Supported LBA-Change 00:10:44.475 I/O Commands 00:10:44.475 ------------ 00:10:44.475 Flush (00h): Supported LBA-Change 00:10:44.475 Write (01h): Supported LBA-Change 00:10:44.475 Read (02h): Supported 00:10:44.475 Compare (05h): Supported 00:10:44.475 Write Zeroes (08h): Supported LBA-Change 00:10:44.475 Dataset Management (09h): Supported LBA-Change 00:10:44.475 Unknown (0Ch): Supported 00:10:44.475 Unknown (12h): Supported 00:10:44.475 Copy (19h): Supported LBA-Change 00:10:44.475 Unknown (1Dh): Supported LBA-Change 00:10:44.475 00:10:44.475 Error Log 00:10:44.475 ========= 00:10:44.475 00:10:44.475 Arbitration 00:10:44.475 =========== 00:10:44.475 Arbitration Burst: no limit 00:10:44.475 00:10:44.475 Power Management 00:10:44.475 ================ 00:10:44.475 Number of Power States: 1 00:10:44.475 Current Power State: Power State #0 00:10:44.475 Power State #0: 00:10:44.475 Max Power: 25.00 W 00:10:44.475 Non-Operational State: Operational 00:10:44.475 Entry Latency: 16 microseconds 00:10:44.475 Exit Latency: 4 microseconds 00:10:44.475 Relative Read Throughput: 0 00:10:44.475 Relative Read Latency: 0 00:10:44.475 Relative Write Throughput: 0 00:10:44.475 Relative Write Latency: 0 00:10:44.475 Idle Power: Not Reported 00:10:44.475 Active Power: Not Reported 00:10:44.475 Non-Operational Permissive Mode: Not Supported 00:10:44.475 00:10:44.475 Health Information 00:10:44.475 ================== 00:10:44.475 Critical Warnings: 00:10:44.475 Available Spare Space: OK 00:10:44.475 Temperature: OK 00:10:44.475 Device Reliability: OK 00:10:44.475 Read Only: No 00:10:44.475 Volatile Memory Backup: OK 00:10:44.475 Current Temperature: 323 Kelvin (50 Celsius) 00:10:44.475 
Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:44.475 Available Spare: 0% 00:10:44.475 Available Spare Threshold: 0% 00:10:44.475 Life Percentage Used: 0% 00:10:44.475 Data Units Read: 1031 00:10:44.475 Data Units Written: 925 00:10:44.475 Host Read Commands: 36830 00:10:44.475 Host Write Commands: 35420 00:10:44.475 Controller Busy Time: 0 minutes 00:10:44.475 Power Cycles: 0 00:10:44.475 Power On Hours: 0 hours 00:10:44.475 Unsafe Shutdowns: 0 00:10:44.475 Unrecoverable Media Errors: 0 00:10:44.475 Lifetime Error Log Entries: 0 00:10:44.475 Warning Temperature Time: 0 minutes 00:10:44.475 Critical Temperature Time: 0 minutes 00:10:44.475 00:10:44.475 Number of Queues 00:10:44.475 ================ 00:10:44.475 Number of I/O Submission Queues: 64 00:10:44.475 Number of I/O Completion Queues: 64 00:10:44.475 00:10:44.475 ZNS Specific Controller Data 00:10:44.475 ============================ 00:10:44.475 Zone Append Size Limit: 0 00:10:44.475 00:10:44.475 00:10:44.475 Active Namespaces 00:10:44.475 ================= 00:10:44.475 Namespace ID:1 00:10:44.475 Error Recovery Timeout: Unlimited 00:10:44.475 Command Set Identifier: NVM (00h) 00:10:44.475 Deallocate: Supported 00:10:44.475 Deallocated/Unwritten Error: Supported 00:10:44.475 Deallocated Read Value: All 0x00 00:10:44.475 Deallocate in Write Zeroes: Not Supported 00:10:44.475 Deallocated Guard Field: 0xFFFF 00:10:44.475 Flush: Supported 00:10:44.475 Reservation: Not Supported 00:10:44.475 Namespace Sharing Capabilities: Multiple Controllers 00:10:44.475 Size (in LBAs): 262144 (1GiB) 00:10:44.475 Capacity (in LBAs): 262144 (1GiB) 00:10:44.475 Utilization (in LBAs): 262144 (1GiB) 00:10:44.475 Thin Provisioning: Not Supported 00:10:44.475 Per-NS Atomic Units: No 00:10:44.475 Maximum Single Source Range Length: 128 00:10:44.475 Maximum Copy Length: 128 00:10:44.475 Maximum Source Range Count: 128 00:10:44.475 NGUID/EUI64 Never Reused: No 00:10:44.475 Namespace Write Protected: No 00:10:44.475 Endurance group ID: 1 00:10:44.475 Number of LBA Formats: 8 00:10:44.475 Current LBA Format: LBA Format #04 00:10:44.475 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:44.475 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:44.475 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:44.475 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:44.475 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:44.475 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:44.475 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:44.475 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:44.475 00:10:44.475 Get Feature FDP: 00:10:44.475 ================ 00:10:44.475 Enabled: Yes 00:10:44.475 FDP configuration index: 0 00:10:44.475 00:10:44.475 FDP configurations log page 00:10:44.475 =========================== 00:10:44.475 Number of FDP configurations: 1 00:10:44.475 Version: 0 00:10:44.475 Size: 112 00:10:44.475 FDP Configuration Descriptor: 0 00:10:44.475 Descriptor Size: 96 00:10:44.475 Reclaim Group Identifier format: 2 00:10:44.475 FDP Volatile Write Cache: Not Present 00:10:44.475 FDP Configuration: Valid 00:10:44.475 Vendor Specific Size: 0 00:10:44.475 Number of Reclaim Groups: 2 00:10:44.475 Number of Reclaim Unit Handles: 8 00:10:44.475 Max Placement Identifiers: 128 00:10:44.475 Number of Namespaces Supported: 256 00:10:44.475 Reclaim unit Nominal Size: 6000000 bytes 00:10:44.475 Estimated Reclaim Unit Time Limit: Not Reported 00:10:44.475 RUH Desc #000: RUH Type: Initially Isolated 00:10:44.475 RUH Desc #001: RUH
Type: Initially Isolated 00:10:44.475 RUH Desc #002: RUH Type: Initially Isolated 00:10:44.475 RUH Desc #003: RUH Type: Initially Isolated 00:10:44.475 RUH Desc #004: RUH Type: Initially Isolated 00:10:44.475 RUH Desc #005: RUH Type: Initially Isolated 00:10:44.475 RUH Desc #006: RUH Type: Initially Isolated 00:10:44.475 RUH Desc #007: RUH Type: Initially Isolated 00:10:44.475 00:10:44.475 FDP reclaim unit handle usage log page 00:10:44.475 ====================================== 00:10:44.475 Number of Reclaim Unit Handles: 8 00:10:44.475 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:44.475 RUH Usage Desc #001: RUH Attributes: Unused 00:10:44.475 RUH Usage Desc #002: RUH Attributes: Unused 00:10:44.475 RUH Usage Desc #003: RUH Attributes: Unused 00:10:44.475 RUH Usage Desc #004: RUH Attributes: Unused 00:10:44.475 RUH Usage Desc #005: RUH Attributes: Unused 00:10:44.475 RUH Usage Desc #006: RUH Attributes: Unused 00:10:44.475 RUH Usage Desc #007: RUH Attributes: Unused 00:10:44.475 00:10:44.475 FDP statistics log page 00:10:44.475 ======================= 00:10:44.475 Host bytes with metadata written: 572366848 00:10:44.475 Med[2024-07-15 15:06:22.449609] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 70116 terminated unexpected 00:10:44.475 ia bytes with metadata written: 572444672 00:10:44.475 Media bytes erased: 0 00:10:44.475 00:10:44.475 FDP events log page 00:10:44.475 =================== 00:10:44.475 Number of FDP events: 0 00:10:44.475 00:10:44.475 NVM Specific Namespace Data 00:10:44.475 =========================== 00:10:44.475 Logical Block Storage Tag Mask: 0 00:10:44.475 Protection Information Capabilities: 00:10:44.475 16b Guard Protection Information Storage Tag Support: No 00:10:44.475 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:44.475 Storage Tag Check Read Support: No 00:10:44.475 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.475 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.475 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.475 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.475 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.475 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.475 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.475 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.475 ===================================================== 00:10:44.475 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:44.475 ===================================================== 00:10:44.475 Controller Capabilities/Features 00:10:44.475 ================================ 00:10:44.475 Vendor ID: 1b36 00:10:44.475 Subsystem Vendor ID: 1af4 00:10:44.475 Serial Number: 12342 00:10:44.475 Model Number: QEMU NVMe Ctrl 00:10:44.475 Firmware Version: 8.0.0 00:10:44.475 Recommended Arb Burst: 6 00:10:44.475 IEEE OUI Identifier: 00 54 52 00:10:44.475 Multi-path I/O 00:10:44.475 May have multiple subsystem ports: No 00:10:44.475 May have multiple controllers: No 00:10:44.475 Associated with SR-IOV VF: No 00:10:44.475 Max Data Transfer Size: 524288 00:10:44.475 Max Number of Namespaces: 256 00:10:44.475 
Max Number of I/O Queues: 64 00:10:44.475 NVMe Specification Version (VS): 1.4 00:10:44.475 NVMe Specification Version (Identify): 1.4 00:10:44.475 Maximum Queue Entries: 2048 00:10:44.475 Contiguous Queues Required: Yes 00:10:44.475 Arbitration Mechanisms Supported 00:10:44.475 Weighted Round Robin: Not Supported 00:10:44.475 Vendor Specific: Not Supported 00:10:44.475 Reset Timeout: 7500 ms 00:10:44.475 Doorbell Stride: 4 bytes 00:10:44.475 NVM Subsystem Reset: Not Supported 00:10:44.475 Command Sets Supported 00:10:44.475 NVM Command Set: Supported 00:10:44.475 Boot Partition: Not Supported 00:10:44.475 Memory Page Size Minimum: 4096 bytes 00:10:44.475 Memory Page Size Maximum: 65536 bytes 00:10:44.475 Persistent Memory Region: Not Supported 00:10:44.475 Optional Asynchronous Events Supported 00:10:44.475 Namespace Attribute Notices: Supported 00:10:44.475 Firmware Activation Notices: Not Supported 00:10:44.475 ANA Change Notices: Not Supported 00:10:44.475 PLE Aggregate Log Change Notices: Not Supported 00:10:44.475 LBA Status Info Alert Notices: Not Supported 00:10:44.475 EGE Aggregate Log Change Notices: Not Supported 00:10:44.475 Normal NVM Subsystem Shutdown event: Not Supported 00:10:44.475 Zone Descriptor Change Notices: Not Supported 00:10:44.475 Discovery Log Change Notices: Not Supported 00:10:44.475 Controller Attributes 00:10:44.475 128-bit Host Identifier: Not Supported 00:10:44.476 Non-Operational Permissive Mode: Not Supported 00:10:44.476 NVM Sets: Not Supported 00:10:44.476 Read Recovery Levels: Not Supported 00:10:44.476 Endurance Groups: Not Supported 00:10:44.476 Predictable Latency Mode: Not Supported 00:10:44.476 Traffic Based Keep ALive: Not Supported 00:10:44.476 Namespace Granularity: Not Supported 00:10:44.476 SQ Associations: Not Supported 00:10:44.476 UUID List: Not Supported 00:10:44.476 Multi-Domain Subsystem: Not Supported 00:10:44.476 Fixed Capacity Management: Not Supported 00:10:44.476 Variable Capacity Management: Not Supported 00:10:44.476 Delete Endurance Group: Not Supported 00:10:44.476 Delete NVM Set: Not Supported 00:10:44.476 Extended LBA Formats Supported: Supported 00:10:44.476 Flexible Data Placement Supported: Not Supported 00:10:44.476 00:10:44.476 Controller Memory Buffer Support 00:10:44.476 ================================ 00:10:44.476 Supported: No 00:10:44.476 00:10:44.476 Persistent Memory Region Support 00:10:44.476 ================================ 00:10:44.476 Supported: No 00:10:44.476 00:10:44.476 Admin Command Set Attributes 00:10:44.476 ============================ 00:10:44.476 Security Send/Receive: Not Supported 00:10:44.476 Format NVM: Supported 00:10:44.476 Firmware Activate/Download: Not Supported 00:10:44.476 Namespace Management: Supported 00:10:44.476 Device Self-Test: Not Supported 00:10:44.476 Directives: Supported 00:10:44.476 NVMe-MI: Not Supported 00:10:44.476 Virtualization Management: Not Supported 00:10:44.476 Doorbell Buffer Config: Supported 00:10:44.476 Get LBA Status Capability: Not Supported 00:10:44.476 Command & Feature Lockdown Capability: Not Supported 00:10:44.476 Abort Command Limit: 4 00:10:44.476 Async Event Request Limit: 4 00:10:44.476 Number of Firmware Slots: N/A 00:10:44.476 Firmware Slot 1 Read-Only: N/A 00:10:44.476 Firmware Activation Without Reset: N/A 00:10:44.476 Multiple Update Detection Support: N/A 00:10:44.476 Firmware Update Granularity: No Information Provided 00:10:44.476 Per-Namespace SMART Log: Yes 00:10:44.476 Asymmetric Namespace Access Log Page: Not Supported 00:10:44.476 
Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:44.476 Command Effects Log Page: Supported 00:10:44.476 Get Log Page Extended Data: Supported 00:10:44.476 Telemetry Log Pages: Not Supported 00:10:44.476 Persistent Event Log Pages: Not Supported 00:10:44.476 Supported Log Pages Log Page: May Support 00:10:44.476 Commands Supported & Effects Log Page: Not Supported 00:10:44.476 Feature Identifiers & Effects Log Page:May Support 00:10:44.476 NVMe-MI Commands & Effects Log Page: May Support 00:10:44.476 Data Area 4 for Telemetry Log: Not Supported 00:10:44.476 Error Log Page Entries Supported: 1 00:10:44.476 Keep Alive: Not Supported 00:10:44.476 00:10:44.476 NVM Command Set Attributes 00:10:44.476 ========================== 00:10:44.476 Submission Queue Entry Size 00:10:44.476 Max: 64 00:10:44.476 Min: 64 00:10:44.476 Completion Queue Entry Size 00:10:44.476 Max: 16 00:10:44.476 Min: 16 00:10:44.476 Number of Namespaces: 256 00:10:44.476 Compare Command: Supported 00:10:44.476 Write Uncorrectable Command: Not Supported 00:10:44.476 Dataset Management Command: Supported 00:10:44.476 Write Zeroes Command: Supported 00:10:44.476 Set Features Save Field: Supported 00:10:44.476 Reservations: Not Supported 00:10:44.476 Timestamp: Supported 00:10:44.476 Copy: Supported 00:10:44.476 Volatile Write Cache: Present 00:10:44.476 Atomic Write Unit (Normal): 1 00:10:44.476 Atomic Write Unit (PFail): 1 00:10:44.476 Atomic Compare & Write Unit: 1 00:10:44.476 Fused Compare & Write: Not Supported 00:10:44.476 Scatter-Gather List 00:10:44.476 SGL Command Set: Supported 00:10:44.476 SGL Keyed: Not Supported 00:10:44.476 SGL Bit Bucket Descriptor: Not Supported 00:10:44.476 SGL Metadata Pointer: Not Supported 00:10:44.476 Oversized SGL: Not Supported 00:10:44.476 SGL Metadata Address: Not Supported 00:10:44.476 SGL Offset: Not Supported 00:10:44.476 Transport SGL Data Block: Not Supported 00:10:44.476 Replay Protected Memory Block: Not Supported 00:10:44.476 00:10:44.476 Firmware Slot Information 00:10:44.476 ========================= 00:10:44.476 Active slot: 1 00:10:44.476 Slot 1 Firmware Revision: 1.0 00:10:44.476 00:10:44.476 00:10:44.476 Commands Supported and Effects 00:10:44.476 ============================== 00:10:44.476 Admin Commands 00:10:44.476 -------------- 00:10:44.476 Delete I/O Submission Queue (00h): Supported 00:10:44.476 Create I/O Submission Queue (01h): Supported 00:10:44.476 Get Log Page (02h): Supported 00:10:44.476 Delete I/O Completion Queue (04h): Supported 00:10:44.476 Create I/O Completion Queue (05h): Supported 00:10:44.476 Identify (06h): Supported 00:10:44.476 Abort (08h): Supported 00:10:44.476 Set Features (09h): Supported 00:10:44.476 Get Features (0Ah): Supported 00:10:44.476 Asynchronous Event Request (0Ch): Supported 00:10:44.476 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:44.476 Directive Send (19h): Supported 00:10:44.476 Directive Receive (1Ah): Supported 00:10:44.476 Virtualization Management (1Ch): Supported 00:10:44.476 Doorbell Buffer Config (7Ch): Supported 00:10:44.476 Format NVM (80h): Supported LBA-Change 00:10:44.476 I/O Commands 00:10:44.476 ------------ 00:10:44.476 Flush (00h): Supported LBA-Change 00:10:44.476 Write (01h): Supported LBA-Change 00:10:44.476 Read (02h): Supported 00:10:44.476 Compare (05h): Supported 00:10:44.476 Write Zeroes (08h): Supported LBA-Change 00:10:44.476 Dataset Management (09h): Supported LBA-Change 00:10:44.476 Unknown (0Ch): Supported 00:10:44.476 Unknown (12h): Supported 00:10:44.476 Copy (19h): Supported 
LBA-Change 00:10:44.476 Unknown (1Dh): Supported LBA-Change 00:10:44.476 00:10:44.476 Error Log 00:10:44.476 ========= 00:10:44.476 00:10:44.476 Arbitration 00:10:44.476 =========== 00:10:44.476 Arbitration Burst: no limit 00:10:44.476 00:10:44.476 Power Management 00:10:44.476 ================ 00:10:44.476 Number of Power States: 1 00:10:44.476 Current Power State: Power State #0 00:10:44.476 Power State #0: 00:10:44.476 Max Power: 25.00 W 00:10:44.476 Non-Operational State: Operational 00:10:44.476 Entry Latency: 16 microseconds 00:10:44.476 Exit Latency: 4 microseconds 00:10:44.476 Relative Read Throughput: 0 00:10:44.476 Relative Read Latency: 0 00:10:44.476 Relative Write Throughput: 0 00:10:44.476 Relative Write Latency: 0 00:10:44.476 Idle Power: Not Reported 00:10:44.476 Active Power: Not Reported 00:10:44.476 Non-Operational Permissive Mode: Not Supported 00:10:44.476 00:10:44.476 Health Information 00:10:44.476 ================== 00:10:44.476 Critical Warnings: 00:10:44.476 Available Spare Space: OK 00:10:44.476 Temperature: OK 00:10:44.476 Device Reliability: OK 00:10:44.476 Read Only: No 00:10:44.476 Volatile Memory Backup: OK 00:10:44.476 Current Temperature: 323 Kelvin (50 Celsius) 00:10:44.476 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:44.476 Available Spare: 0% 00:10:44.476 Available Spare Threshold: 0% 00:10:44.476 Life Percentage Used: 0% 00:10:44.476 Data Units Read: 2549 00:10:44.476 Data Units Written: 2230 00:10:44.476 Host Read Commands: 105821 00:10:44.476 Host Write Commands: 101591 00:10:44.476 Controller Busy Time: 0 minutes 00:10:44.476 Power Cycles: 0 00:10:44.476 Power On Hours: 0 hours 00:10:44.476 Unsafe Shutdowns: 0 00:10:44.476 Unrecoverable Media Errors: 0 00:10:44.476 Lifetime Error Log Entries: 0 00:10:44.476 Warning Temperature Time: 0 minutes 00:10:44.476 Critical Temperature Time: 0 minutes 00:10:44.476 00:10:44.476 Number of Queues 00:10:44.476 ================ 00:10:44.476 Number of I/O Submission Queues: 64 00:10:44.476 Number of I/O Completion Queues: 64 00:10:44.476 00:10:44.476 ZNS Specific Controller Data 00:10:44.476 ============================ 00:10:44.476 Zone Append Size Limit: 0 00:10:44.476 00:10:44.476 00:10:44.476 Active Namespaces 00:10:44.476 ================= 00:10:44.476 Namespace ID:1 00:10:44.476 Error Recovery Timeout: Unlimited 00:10:44.476 Command Set Identifier: NVM (00h) 00:10:44.476 Deallocate: Supported 00:10:44.476 Deallocated/Unwritten Error: Supported 00:10:44.476 Deallocated Read Value: All 0x00 00:10:44.476 Deallocate in Write Zeroes: Not Supported 00:10:44.476 Deallocated Guard Field: 0xFFFF 00:10:44.476 Flush: Supported 00:10:44.476 Reservation: Not Supported 00:10:44.476 Namespace Sharing Capabilities: Private 00:10:44.476 Size (in LBAs): 1048576 (4GiB) 00:10:44.476 Capacity (in LBAs): 1048576 (4GiB) 00:10:44.476 Utilization (in LBAs): 1048576 (4GiB) 00:10:44.476 Thin Provisioning: Not Supported 00:10:44.476 Per-NS Atomic Units: No 00:10:44.476 Maximum Single Source Range Length: 128 00:10:44.476 Maximum Copy Length: 128 00:10:44.476 Maximum Source Range Count: 128 00:10:44.476 NGUID/EUI64 Never Reused: No 00:10:44.476 Namespace Write Protected: No 00:10:44.476 Number of LBA Formats: 8 00:10:44.476 Current LBA Format: LBA Format #04 00:10:44.476 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:44.476 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:44.476 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:44.476 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:44.476 LBA Format #04: 
Data Size: 4096 Metadata Size: 0 00:10:44.476 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:44.476 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:44.476 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:44.476 00:10:44.476 NVM Specific Namespace Data 00:10:44.476 =========================== 00:10:44.476 Logical Block Storage Tag Mask: 0 00:10:44.476 Protection Information Capabilities: 00:10:44.476 16b Guard Protection Information Storage Tag Support: No 00:10:44.476 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:44.476 Storage Tag Check Read Support: No 00:10:44.476 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.476 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.476 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.476 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.476 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.476 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.476 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.476 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.476 Namespace ID:2 00:10:44.476 Error Recovery Timeout: Unlimited 00:10:44.476 Command Set Identifier: NVM (00h) 00:10:44.476 Deallocate: Supported 00:10:44.476 Deallocated/Unwritten Error: Supported 00:10:44.476 Deallocated Read Value: All 0x00 00:10:44.476 Deallocate in Write Zeroes: Not Supported 00:10:44.476 Deallocated Guard Field: 0xFFFF 00:10:44.476 Flush: Supported 00:10:44.476 Reservation: Not Supported 00:10:44.476 Namespace Sharing Capabilities: Private 00:10:44.476 Size (in LBAs): 1048576 (4GiB) 00:10:44.476 Capacity (in LBAs): 1048576 (4GiB) 00:10:44.476 Utilization (in LBAs): 1048576 (4GiB) 00:10:44.476 Thin Provisioning: Not Supported 00:10:44.476 Per-NS Atomic Units: No 00:10:44.476 Maximum Single Source Range Length: 128 00:10:44.476 Maximum Copy Length: 128 00:10:44.476 Maximum Source Range Count: 128 00:10:44.476 NGUID/EUI64 Never Reused: No 00:10:44.476 Namespace Write Protected: No 00:10:44.476 Number of LBA Formats: 8 00:10:44.476 Current LBA Format: LBA Format #04 00:10:44.476 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:44.476 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:44.476 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:44.476 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:44.476 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:44.476 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:44.476 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:44.476 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:44.476 00:10:44.476 NVM Specific Namespace Data 00:10:44.476 =========================== 00:10:44.476 Logical Block Storage Tag Mask: 0 00:10:44.476 Protection Information Capabilities: 00:10:44.476 16b Guard Protection Information Storage Tag Support: No 00:10:44.476 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:44.476 Storage Tag Check Read Support: No 00:10:44.476 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.476 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 
16b Guard PI 00:10:44.476 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.476 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.476 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.476 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.476 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.476 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.476 Namespace ID:3 00:10:44.476 Error Recovery Timeout: Unlimited 00:10:44.476 Command Set Identifier: NVM (00h) 00:10:44.476 Deallocate: Supported 00:10:44.476 Deallocated/Unwritten Error: Supported 00:10:44.476 Deallocated Read Value: All 0x00 00:10:44.476 Deallocate in Write Zeroes: Not Supported 00:10:44.476 Deallocated Guard Field: 0xFFFF 00:10:44.476 Flush: Supported 00:10:44.476 Reservation: Not Supported 00:10:44.476 Namespace Sharing Capabilities: Private 00:10:44.476 Size (in LBAs): 1048576 (4GiB) 00:10:44.476 Capacity (in LBAs): 1048576 (4GiB) 00:10:44.476 Utilization (in LBAs): 1048576 (4GiB) 00:10:44.476 Thin Provisioning: Not Supported 00:10:44.476 Per-NS Atomic Units: No 00:10:44.476 Maximum Single Source Range Length: 128 00:10:44.476 Maximum Copy Length: 128 00:10:44.476 Maximum Source Range Count: 128 00:10:44.476 NGUID/EUI64 Never Reused: No 00:10:44.476 Namespace Write Protected: No 00:10:44.476 Number of LBA Formats: 8 00:10:44.477 Current LBA Format: LBA Format #04 00:10:44.477 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:44.477 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:44.477 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:44.477 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:44.477 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:44.477 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:44.477 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:44.477 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:44.477 00:10:44.477 NVM Specific Namespace Data 00:10:44.477 =========================== 00:10:44.477 Logical Block Storage Tag Mask: 0 00:10:44.477 Protection Information Capabilities: 00:10:44.477 16b Guard Protection Information Storage Tag Support: No 00:10:44.477 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:44.477 Storage Tag Check Read Support: No 00:10:44.477 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.477 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.477 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.477 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.477 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.477 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.477 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.477 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.477 15:06:22 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:44.477 15:06:22 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:10:44.735 ===================================================== 00:10:44.735 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:44.735 ===================================================== 00:10:44.735 Controller Capabilities/Features 00:10:44.735 ================================ 00:10:44.735 Vendor ID: 1b36 00:10:44.735 Subsystem Vendor ID: 1af4 00:10:44.735 Serial Number: 12340 00:10:44.735 Model Number: QEMU NVMe Ctrl 00:10:44.735 Firmware Version: 8.0.0 00:10:44.735 Recommended Arb Burst: 6 00:10:44.735 IEEE OUI Identifier: 00 54 52 00:10:44.735 Multi-path I/O 00:10:44.735 May have multiple subsystem ports: No 00:10:44.735 May have multiple controllers: No 00:10:44.735 Associated with SR-IOV VF: No 00:10:44.735 Max Data Transfer Size: 524288 00:10:44.735 Max Number of Namespaces: 256 00:10:44.735 Max Number of I/O Queues: 64 00:10:44.735 NVMe Specification Version (VS): 1.4 00:10:44.735 NVMe Specification Version (Identify): 1.4 00:10:44.735 Maximum Queue Entries: 2048 00:10:44.735 Contiguous Queues Required: Yes 00:10:44.735 Arbitration Mechanisms Supported 00:10:44.735 Weighted Round Robin: Not Supported 00:10:44.735 Vendor Specific: Not Supported 00:10:44.735 Reset Timeout: 7500 ms 00:10:44.735 Doorbell Stride: 4 bytes 00:10:44.735 NVM Subsystem Reset: Not Supported 00:10:44.735 Command Sets Supported 00:10:44.735 NVM Command Set: Supported 00:10:44.735 Boot Partition: Not Supported 00:10:44.735 Memory Page Size Minimum: 4096 bytes 00:10:44.735 Memory Page Size Maximum: 65536 bytes 00:10:44.735 Persistent Memory Region: Not Supported 00:10:44.735 Optional Asynchronous Events Supported 00:10:44.735 Namespace Attribute Notices: Supported 00:10:44.735 Firmware Activation Notices: Not Supported 00:10:44.735 ANA Change Notices: Not Supported 00:10:44.735 PLE Aggregate Log Change Notices: Not Supported 00:10:44.735 LBA Status Info Alert Notices: Not Supported 00:10:44.736 EGE Aggregate Log Change Notices: Not Supported 00:10:44.736 Normal NVM Subsystem Shutdown event: Not Supported 00:10:44.736 Zone Descriptor Change Notices: Not Supported 00:10:44.736 Discovery Log Change Notices: Not Supported 00:10:44.736 Controller Attributes 00:10:44.736 128-bit Host Identifier: Not Supported 00:10:44.736 Non-Operational Permissive Mode: Not Supported 00:10:44.736 NVM Sets: Not Supported 00:10:44.736 Read Recovery Levels: Not Supported 00:10:44.736 Endurance Groups: Not Supported 00:10:44.736 Predictable Latency Mode: Not Supported 00:10:44.736 Traffic Based Keep ALive: Not Supported 00:10:44.736 Namespace Granularity: Not Supported 00:10:44.736 SQ Associations: Not Supported 00:10:44.736 UUID List: Not Supported 00:10:44.736 Multi-Domain Subsystem: Not Supported 00:10:44.736 Fixed Capacity Management: Not Supported 00:10:44.736 Variable Capacity Management: Not Supported 00:10:44.736 Delete Endurance Group: Not Supported 00:10:44.736 Delete NVM Set: Not Supported 00:10:44.736 Extended LBA Formats Supported: Supported 00:10:44.736 Flexible Data Placement Supported: Not Supported 00:10:44.736 00:10:44.736 Controller Memory Buffer Support 00:10:44.736 ================================ 00:10:44.736 Supported: No 00:10:44.736 00:10:44.736 Persistent Memory Region Support 00:10:44.736 ================================ 00:10:44.736 Supported: No 00:10:44.736 00:10:44.736 Admin Command Set Attributes 00:10:44.736 ============================ 00:10:44.736 Security Send/Receive: Not Supported 00:10:44.736 
Format NVM: Supported 00:10:44.736 Firmware Activate/Download: Not Supported 00:10:44.736 Namespace Management: Supported 00:10:44.736 Device Self-Test: Not Supported 00:10:44.736 Directives: Supported 00:10:44.736 NVMe-MI: Not Supported 00:10:44.736 Virtualization Management: Not Supported 00:10:44.736 Doorbell Buffer Config: Supported 00:10:44.736 Get LBA Status Capability: Not Supported 00:10:44.736 Command & Feature Lockdown Capability: Not Supported 00:10:44.736 Abort Command Limit: 4 00:10:44.736 Async Event Request Limit: 4 00:10:44.736 Number of Firmware Slots: N/A 00:10:44.736 Firmware Slot 1 Read-Only: N/A 00:10:44.736 Firmware Activation Without Reset: N/A 00:10:44.736 Multiple Update Detection Support: N/A 00:10:44.736 Firmware Update Granularity: No Information Provided 00:10:44.736 Per-Namespace SMART Log: Yes 00:10:44.736 Asymmetric Namespace Access Log Page: Not Supported 00:10:44.736 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:44.736 Command Effects Log Page: Supported 00:10:44.736 Get Log Page Extended Data: Supported 00:10:44.736 Telemetry Log Pages: Not Supported 00:10:44.736 Persistent Event Log Pages: Not Supported 00:10:44.736 Supported Log Pages Log Page: May Support 00:10:44.736 Commands Supported & Effects Log Page: Not Supported 00:10:44.736 Feature Identifiers & Effects Log Page:May Support 00:10:44.736 NVMe-MI Commands & Effects Log Page: May Support 00:10:44.736 Data Area 4 for Telemetry Log: Not Supported 00:10:44.736 Error Log Page Entries Supported: 1 00:10:44.736 Keep Alive: Not Supported 00:10:44.736 00:10:44.736 NVM Command Set Attributes 00:10:44.736 ========================== 00:10:44.736 Submission Queue Entry Size 00:10:44.736 Max: 64 00:10:44.736 Min: 64 00:10:44.736 Completion Queue Entry Size 00:10:44.736 Max: 16 00:10:44.736 Min: 16 00:10:44.736 Number of Namespaces: 256 00:10:44.736 Compare Command: Supported 00:10:44.736 Write Uncorrectable Command: Not Supported 00:10:44.736 Dataset Management Command: Supported 00:10:44.736 Write Zeroes Command: Supported 00:10:44.736 Set Features Save Field: Supported 00:10:44.736 Reservations: Not Supported 00:10:44.736 Timestamp: Supported 00:10:44.736 Copy: Supported 00:10:44.736 Volatile Write Cache: Present 00:10:44.736 Atomic Write Unit (Normal): 1 00:10:44.736 Atomic Write Unit (PFail): 1 00:10:44.736 Atomic Compare & Write Unit: 1 00:10:44.736 Fused Compare & Write: Not Supported 00:10:44.736 Scatter-Gather List 00:10:44.736 SGL Command Set: Supported 00:10:44.736 SGL Keyed: Not Supported 00:10:44.736 SGL Bit Bucket Descriptor: Not Supported 00:10:44.736 SGL Metadata Pointer: Not Supported 00:10:44.736 Oversized SGL: Not Supported 00:10:44.736 SGL Metadata Address: Not Supported 00:10:44.736 SGL Offset: Not Supported 00:10:44.736 Transport SGL Data Block: Not Supported 00:10:44.736 Replay Protected Memory Block: Not Supported 00:10:44.736 00:10:44.736 Firmware Slot Information 00:10:44.736 ========================= 00:10:44.736 Active slot: 1 00:10:44.736 Slot 1 Firmware Revision: 1.0 00:10:44.736 00:10:44.736 00:10:44.736 Commands Supported and Effects 00:10:44.736 ============================== 00:10:44.736 Admin Commands 00:10:44.736 -------------- 00:10:44.736 Delete I/O Submission Queue (00h): Supported 00:10:44.736 Create I/O Submission Queue (01h): Supported 00:10:44.736 Get Log Page (02h): Supported 00:10:44.736 Delete I/O Completion Queue (04h): Supported 00:10:44.736 Create I/O Completion Queue (05h): Supported 00:10:44.736 Identify (06h): Supported 00:10:44.736 Abort (08h): Supported 
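The identify dumps in this section come from the traced loop over PCIe BDFs shown earlier (nvme.sh@15/@16). A minimal sketch of how that loop could be reproduced by hand follows; the BDF discovery via lspci is an assumption and not taken from this log, while the binary path and flags are the ones visible in the trace.
bdfs=($(lspci -D -d ::0108 | awk '{print $1}'))   # assumed: list NVMe-class (0108) PCI functions
for bdf in "${bdfs[@]}"; do
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r "trtype:PCIe traddr:${bdf}" -i 0
done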
00:10:44.736 Set Features (09h): Supported 00:10:44.736 Get Features (0Ah): Supported 00:10:44.736 Asynchronous Event Request (0Ch): Supported 00:10:44.736 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:44.736 Directive Send (19h): Supported 00:10:44.736 Directive Receive (1Ah): Supported 00:10:44.736 Virtualization Management (1Ch): Supported 00:10:44.736 Doorbell Buffer Config (7Ch): Supported 00:10:44.736 Format NVM (80h): Supported LBA-Change 00:10:44.736 I/O Commands 00:10:44.736 ------------ 00:10:44.736 Flush (00h): Supported LBA-Change 00:10:44.736 Write (01h): Supported LBA-Change 00:10:44.736 Read (02h): Supported 00:10:44.736 Compare (05h): Supported 00:10:44.736 Write Zeroes (08h): Supported LBA-Change 00:10:44.736 Dataset Management (09h): Supported LBA-Change 00:10:44.736 Unknown (0Ch): Supported 00:10:44.736 Unknown (12h): Supported 00:10:44.736 Copy (19h): Supported LBA-Change 00:10:44.736 Unknown (1Dh): Supported LBA-Change 00:10:44.736 00:10:44.736 Error Log 00:10:44.736 ========= 00:10:44.736 00:10:44.736 Arbitration 00:10:44.736 =========== 00:10:44.736 Arbitration Burst: no limit 00:10:44.736 00:10:44.736 Power Management 00:10:44.736 ================ 00:10:44.736 Number of Power States: 1 00:10:44.736 Current Power State: Power State #0 00:10:44.736 Power State #0: 00:10:44.736 Max Power: 25.00 W 00:10:44.736 Non-Operational State: Operational 00:10:44.736 Entry Latency: 16 microseconds 00:10:44.736 Exit Latency: 4 microseconds 00:10:44.736 Relative Read Throughput: 0 00:10:44.736 Relative Read Latency: 0 00:10:44.736 Relative Write Throughput: 0 00:10:44.736 Relative Write Latency: 0 00:10:44.736 Idle Power: Not Reported 00:10:44.736 Active Power: Not Reported 00:10:44.736 Non-Operational Permissive Mode: Not Supported 00:10:44.736 00:10:44.736 Health Information 00:10:44.736 ================== 00:10:44.736 Critical Warnings: 00:10:44.736 Available Spare Space: OK 00:10:44.736 Temperature: OK 00:10:44.736 Device Reliability: OK 00:10:44.736 Read Only: No 00:10:44.736 Volatile Memory Backup: OK 00:10:44.736 Current Temperature: 323 Kelvin (50 Celsius) 00:10:44.736 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:44.736 Available Spare: 0% 00:10:44.736 Available Spare Threshold: 0% 00:10:44.736 Life Percentage Used: 0% 00:10:44.736 Data Units Read: 773 00:10:44.736 Data Units Written: 664 00:10:44.736 Host Read Commands: 34297 00:10:44.736 Host Write Commands: 33335 00:10:44.736 Controller Busy Time: 0 minutes 00:10:44.736 Power Cycles: 0 00:10:44.736 Power On Hours: 0 hours 00:10:44.736 Unsafe Shutdowns: 0 00:10:44.736 Unrecoverable Media Errors: 0 00:10:44.736 Lifetime Error Log Entries: 0 00:10:44.736 Warning Temperature Time: 0 minutes 00:10:44.736 Critical Temperature Time: 0 minutes 00:10:44.736 00:10:44.736 Number of Queues 00:10:44.736 ================ 00:10:44.736 Number of I/O Submission Queues: 64 00:10:44.736 Number of I/O Completion Queues: 64 00:10:44.736 00:10:44.736 ZNS Specific Controller Data 00:10:44.736 ============================ 00:10:44.736 Zone Append Size Limit: 0 00:10:44.736 00:10:44.736 00:10:44.736 Active Namespaces 00:10:44.736 ================= 00:10:44.736 Namespace ID:1 00:10:44.736 Error Recovery Timeout: Unlimited 00:10:44.736 Command Set Identifier: NVM (00h) 00:10:44.736 Deallocate: Supported 00:10:44.736 Deallocated/Unwritten Error: Supported 00:10:44.736 Deallocated Read Value: All 0x00 00:10:44.736 Deallocate in Write Zeroes: Not Supported 00:10:44.736 Deallocated Guard Field: 0xFFFF 00:10:44.736 Flush: 
Supported 00:10:44.736 Reservation: Not Supported 00:10:44.736 Metadata Transferred as: Separate Metadata Buffer 00:10:44.736 Namespace Sharing Capabilities: Private 00:10:44.736 Size (in LBAs): 1548666 (5GiB) 00:10:44.736 Capacity (in LBAs): 1548666 (5GiB) 00:10:44.736 Utilization (in LBAs): 1548666 (5GiB) 00:10:44.736 Thin Provisioning: Not Supported 00:10:44.736 Per-NS Atomic Units: No 00:10:44.736 Maximum Single Source Range Length: 128 00:10:44.736 Maximum Copy Length: 128 00:10:44.736 Maximum Source Range Count: 128 00:10:44.736 NGUID/EUI64 Never Reused: No 00:10:44.736 Namespace Write Protected: No 00:10:44.736 Number of LBA Formats: 8 00:10:44.736 Current LBA Format: LBA Format #07 00:10:44.736 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:44.736 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:44.736 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:44.736 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:44.736 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:44.736 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:44.736 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:44.736 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:44.736 00:10:44.736 NVM Specific Namespace Data 00:10:44.736 =========================== 00:10:44.736 Logical Block Storage Tag Mask: 0 00:10:44.736 Protection Information Capabilities: 00:10:44.736 16b Guard Protection Information Storage Tag Support: No 00:10:44.736 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:44.736 Storage Tag Check Read Support: No 00:10:44.736 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.736 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.736 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.736 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.736 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.736 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.736 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.736 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.736 15:06:22 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:44.736 15:06:22 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:10:44.995 ===================================================== 00:10:44.995 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:44.995 ===================================================== 00:10:44.995 Controller Capabilities/Features 00:10:44.995 ================================ 00:10:44.995 Vendor ID: 1b36 00:10:44.995 Subsystem Vendor ID: 1af4 00:10:44.995 Serial Number: 12341 00:10:44.995 Model Number: QEMU NVMe Ctrl 00:10:44.995 Firmware Version: 8.0.0 00:10:44.995 Recommended Arb Burst: 6 00:10:44.995 IEEE OUI Identifier: 00 54 52 00:10:44.995 Multi-path I/O 00:10:44.995 May have multiple subsystem ports: No 00:10:44.995 May have multiple controllers: No 00:10:44.995 Associated with SR-IOV VF: No 00:10:44.995 Max Data Transfer Size: 524288 00:10:44.995 Max Number of Namespaces: 256 00:10:44.995 Max Number of I/O Queues: 64 00:10:44.995 NVMe 
Specification Version (VS): 1.4 00:10:44.995 NVMe Specification Version (Identify): 1.4 00:10:44.995 Maximum Queue Entries: 2048 00:10:44.995 Contiguous Queues Required: Yes 00:10:44.995 Arbitration Mechanisms Supported 00:10:44.995 Weighted Round Robin: Not Supported 00:10:44.995 Vendor Specific: Not Supported 00:10:44.995 Reset Timeout: 7500 ms 00:10:44.995 Doorbell Stride: 4 bytes 00:10:44.995 NVM Subsystem Reset: Not Supported 00:10:44.995 Command Sets Supported 00:10:44.995 NVM Command Set: Supported 00:10:44.995 Boot Partition: Not Supported 00:10:44.995 Memory Page Size Minimum: 4096 bytes 00:10:44.995 Memory Page Size Maximum: 65536 bytes 00:10:44.995 Persistent Memory Region: Not Supported 00:10:44.995 Optional Asynchronous Events Supported 00:10:44.995 Namespace Attribute Notices: Supported 00:10:44.995 Firmware Activation Notices: Not Supported 00:10:44.995 ANA Change Notices: Not Supported 00:10:44.995 PLE Aggregate Log Change Notices: Not Supported 00:10:44.995 LBA Status Info Alert Notices: Not Supported 00:10:44.995 EGE Aggregate Log Change Notices: Not Supported 00:10:44.995 Normal NVM Subsystem Shutdown event: Not Supported 00:10:44.995 Zone Descriptor Change Notices: Not Supported 00:10:44.995 Discovery Log Change Notices: Not Supported 00:10:44.995 Controller Attributes 00:10:44.995 128-bit Host Identifier: Not Supported 00:10:44.995 Non-Operational Permissive Mode: Not Supported 00:10:44.995 NVM Sets: Not Supported 00:10:44.995 Read Recovery Levels: Not Supported 00:10:44.995 Endurance Groups: Not Supported 00:10:44.995 Predictable Latency Mode: Not Supported 00:10:44.995 Traffic Based Keep ALive: Not Supported 00:10:44.995 Namespace Granularity: Not Supported 00:10:44.995 SQ Associations: Not Supported 00:10:44.995 UUID List: Not Supported 00:10:44.995 Multi-Domain Subsystem: Not Supported 00:10:44.995 Fixed Capacity Management: Not Supported 00:10:44.995 Variable Capacity Management: Not Supported 00:10:44.995 Delete Endurance Group: Not Supported 00:10:44.995 Delete NVM Set: Not Supported 00:10:44.995 Extended LBA Formats Supported: Supported 00:10:44.995 Flexible Data Placement Supported: Not Supported 00:10:44.995 00:10:44.995 Controller Memory Buffer Support 00:10:44.995 ================================ 00:10:44.995 Supported: No 00:10:44.995 00:10:44.995 Persistent Memory Region Support 00:10:44.995 ================================ 00:10:44.995 Supported: No 00:10:44.995 00:10:44.995 Admin Command Set Attributes 00:10:44.995 ============================ 00:10:44.995 Security Send/Receive: Not Supported 00:10:44.995 Format NVM: Supported 00:10:44.995 Firmware Activate/Download: Not Supported 00:10:44.995 Namespace Management: Supported 00:10:44.995 Device Self-Test: Not Supported 00:10:44.995 Directives: Supported 00:10:44.995 NVMe-MI: Not Supported 00:10:44.995 Virtualization Management: Not Supported 00:10:44.995 Doorbell Buffer Config: Supported 00:10:44.995 Get LBA Status Capability: Not Supported 00:10:44.995 Command & Feature Lockdown Capability: Not Supported 00:10:44.995 Abort Command Limit: 4 00:10:44.995 Async Event Request Limit: 4 00:10:44.995 Number of Firmware Slots: N/A 00:10:44.995 Firmware Slot 1 Read-Only: N/A 00:10:44.995 Firmware Activation Without Reset: N/A 00:10:44.995 Multiple Update Detection Support: N/A 00:10:44.995 Firmware Update Granularity: No Information Provided 00:10:44.995 Per-Namespace SMART Log: Yes 00:10:44.995 Asymmetric Namespace Access Log Page: Not Supported 00:10:44.995 Subsystem NQN: nqn.2019-08.org.qemu:12341 
00:10:44.995 Command Effects Log Page: Supported 00:10:44.995 Get Log Page Extended Data: Supported 00:10:44.995 Telemetry Log Pages: Not Supported 00:10:44.995 Persistent Event Log Pages: Not Supported 00:10:44.995 Supported Log Pages Log Page: May Support 00:10:44.995 Commands Supported & Effects Log Page: Not Supported 00:10:44.995 Feature Identifiers & Effects Log Page:May Support 00:10:44.995 NVMe-MI Commands & Effects Log Page: May Support 00:10:44.995 Data Area 4 for Telemetry Log: Not Supported 00:10:44.995 Error Log Page Entries Supported: 1 00:10:44.995 Keep Alive: Not Supported 00:10:44.995 00:10:44.995 NVM Command Set Attributes 00:10:44.995 ========================== 00:10:44.995 Submission Queue Entry Size 00:10:44.995 Max: 64 00:10:44.995 Min: 64 00:10:44.995 Completion Queue Entry Size 00:10:44.995 Max: 16 00:10:44.995 Min: 16 00:10:44.995 Number of Namespaces: 256 00:10:44.996 Compare Command: Supported 00:10:44.996 Write Uncorrectable Command: Not Supported 00:10:44.996 Dataset Management Command: Supported 00:10:44.996 Write Zeroes Command: Supported 00:10:44.996 Set Features Save Field: Supported 00:10:44.996 Reservations: Not Supported 00:10:44.996 Timestamp: Supported 00:10:44.996 Copy: Supported 00:10:44.996 Volatile Write Cache: Present 00:10:44.996 Atomic Write Unit (Normal): 1 00:10:44.996 Atomic Write Unit (PFail): 1 00:10:44.996 Atomic Compare & Write Unit: 1 00:10:44.996 Fused Compare & Write: Not Supported 00:10:44.996 Scatter-Gather List 00:10:44.996 SGL Command Set: Supported 00:10:44.996 SGL Keyed: Not Supported 00:10:44.996 SGL Bit Bucket Descriptor: Not Supported 00:10:44.996 SGL Metadata Pointer: Not Supported 00:10:44.996 Oversized SGL: Not Supported 00:10:44.996 SGL Metadata Address: Not Supported 00:10:44.996 SGL Offset: Not Supported 00:10:44.996 Transport SGL Data Block: Not Supported 00:10:44.996 Replay Protected Memory Block: Not Supported 00:10:44.996 00:10:44.996 Firmware Slot Information 00:10:44.996 ========================= 00:10:44.996 Active slot: 1 00:10:44.996 Slot 1 Firmware Revision: 1.0 00:10:44.996 00:10:44.996 00:10:44.996 Commands Supported and Effects 00:10:44.996 ============================== 00:10:44.996 Admin Commands 00:10:44.996 -------------- 00:10:44.996 Delete I/O Submission Queue (00h): Supported 00:10:44.996 Create I/O Submission Queue (01h): Supported 00:10:44.996 Get Log Page (02h): Supported 00:10:44.996 Delete I/O Completion Queue (04h): Supported 00:10:44.996 Create I/O Completion Queue (05h): Supported 00:10:44.996 Identify (06h): Supported 00:10:44.996 Abort (08h): Supported 00:10:44.996 Set Features (09h): Supported 00:10:44.996 Get Features (0Ah): Supported 00:10:44.996 Asynchronous Event Request (0Ch): Supported 00:10:44.996 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:44.996 Directive Send (19h): Supported 00:10:44.996 Directive Receive (1Ah): Supported 00:10:44.996 Virtualization Management (1Ch): Supported 00:10:44.996 Doorbell Buffer Config (7Ch): Supported 00:10:44.996 Format NVM (80h): Supported LBA-Change 00:10:44.996 I/O Commands 00:10:44.996 ------------ 00:10:44.996 Flush (00h): Supported LBA-Change 00:10:44.996 Write (01h): Supported LBA-Change 00:10:44.996 Read (02h): Supported 00:10:44.996 Compare (05h): Supported 00:10:44.996 Write Zeroes (08h): Supported LBA-Change 00:10:44.996 Dataset Management (09h): Supported LBA-Change 00:10:44.996 Unknown (0Ch): Supported 00:10:44.996 Unknown (12h): Supported 00:10:44.996 Copy (19h): Supported LBA-Change 00:10:44.996 Unknown (1Dh): 
Supported LBA-Change 00:10:44.996 00:10:44.996 Error Log 00:10:44.996 ========= 00:10:44.996 00:10:44.996 Arbitration 00:10:44.996 =========== 00:10:44.996 Arbitration Burst: no limit 00:10:44.996 00:10:44.996 Power Management 00:10:44.996 ================ 00:10:44.996 Number of Power States: 1 00:10:44.996 Current Power State: Power State #0 00:10:44.996 Power State #0: 00:10:44.996 Max Power: 25.00 W 00:10:44.996 Non-Operational State: Operational 00:10:44.996 Entry Latency: 16 microseconds 00:10:44.996 Exit Latency: 4 microseconds 00:10:44.996 Relative Read Throughput: 0 00:10:44.996 Relative Read Latency: 0 00:10:44.996 Relative Write Throughput: 0 00:10:44.996 Relative Write Latency: 0 00:10:44.996 Idle Power: Not Reported 00:10:44.996 Active Power: Not Reported 00:10:44.996 Non-Operational Permissive Mode: Not Supported 00:10:44.996 00:10:44.996 Health Information 00:10:44.996 ================== 00:10:44.996 Critical Warnings: 00:10:44.996 Available Spare Space: OK 00:10:44.996 Temperature: OK 00:10:44.996 Device Reliability: OK 00:10:44.996 Read Only: No 00:10:44.996 Volatile Memory Backup: OK 00:10:44.996 Current Temperature: 323 Kelvin (50 Celsius) 00:10:44.996 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:44.996 Available Spare: 0% 00:10:44.996 Available Spare Threshold: 0% 00:10:44.996 Life Percentage Used: 0% 00:10:44.996 Data Units Read: 1196 00:10:44.996 Data Units Written: 983 00:10:44.996 Host Read Commands: 50071 00:10:44.996 Host Write Commands: 47168 00:10:44.996 Controller Busy Time: 0 minutes 00:10:44.996 Power Cycles: 0 00:10:44.996 Power On Hours: 0 hours 00:10:44.996 Unsafe Shutdowns: 0 00:10:44.996 Unrecoverable Media Errors: 0 00:10:44.996 Lifetime Error Log Entries: 0 00:10:44.996 Warning Temperature Time: 0 minutes 00:10:44.996 Critical Temperature Time: 0 minutes 00:10:44.996 00:10:44.996 Number of Queues 00:10:44.996 ================ 00:10:44.996 Number of I/O Submission Queues: 64 00:10:44.996 Number of I/O Completion Queues: 64 00:10:44.996 00:10:44.996 ZNS Specific Controller Data 00:10:44.996 ============================ 00:10:44.996 Zone Append Size Limit: 0 00:10:44.996 00:10:44.996 00:10:44.996 Active Namespaces 00:10:44.996 ================= 00:10:44.996 Namespace ID:1 00:10:44.996 Error Recovery Timeout: Unlimited 00:10:44.996 Command Set Identifier: NVM (00h) 00:10:44.996 Deallocate: Supported 00:10:44.996 Deallocated/Unwritten Error: Supported 00:10:44.996 Deallocated Read Value: All 0x00 00:10:44.996 Deallocate in Write Zeroes: Not Supported 00:10:44.996 Deallocated Guard Field: 0xFFFF 00:10:44.996 Flush: Supported 00:10:44.996 Reservation: Not Supported 00:10:44.997 Namespace Sharing Capabilities: Private 00:10:44.997 Size (in LBAs): 1310720 (5GiB) 00:10:44.997 Capacity (in LBAs): 1310720 (5GiB) 00:10:44.997 Utilization (in LBAs): 1310720 (5GiB) 00:10:44.997 Thin Provisioning: Not Supported 00:10:44.997 Per-NS Atomic Units: No 00:10:44.997 Maximum Single Source Range Length: 128 00:10:44.997 Maximum Copy Length: 128 00:10:44.997 Maximum Source Range Count: 128 00:10:44.997 NGUID/EUI64 Never Reused: No 00:10:44.997 Namespace Write Protected: No 00:10:44.997 Number of LBA Formats: 8 00:10:44.997 Current LBA Format: LBA Format #04 00:10:44.997 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:44.997 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:44.997 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:44.997 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:44.997 LBA Format #04: Data Size: 4096 Metadata Size: 0 
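The Celsius figures in the health sections above are derived from the Kelvin values the controller reports (NVMe composite temperature is specified in Kelvin); the tool appears to subtract 273. A quick check of the two readings printed for this controller:
echo $((323 - 273))   # -> 50, matches "Current Temperature: 323 Kelvin (50 Celsius)"
echo $((343 - 273))   # -> 70, matches "Temperature Threshold: 343 Kelvin (70 Celsius)"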
00:10:44.997 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:44.997 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:44.997 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:44.997 00:10:44.997 NVM Specific Namespace Data 00:10:44.997 =========================== 00:10:44.997 Logical Block Storage Tag Mask: 0 00:10:44.997 Protection Information Capabilities: 00:10:44.997 16b Guard Protection Information Storage Tag Support: No 00:10:44.997 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:44.997 Storage Tag Check Read Support: No 00:10:44.997 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.997 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.997 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.997 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.997 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.997 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.997 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.997 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.997 15:06:23 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:44.997 15:06:23 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:10:45.256 ===================================================== 00:10:45.256 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:45.256 ===================================================== 00:10:45.256 Controller Capabilities/Features 00:10:45.256 ================================ 00:10:45.256 Vendor ID: 1b36 00:10:45.256 Subsystem Vendor ID: 1af4 00:10:45.256 Serial Number: 12342 00:10:45.256 Model Number: QEMU NVMe Ctrl 00:10:45.256 Firmware Version: 8.0.0 00:10:45.256 Recommended Arb Burst: 6 00:10:45.256 IEEE OUI Identifier: 00 54 52 00:10:45.256 Multi-path I/O 00:10:45.256 May have multiple subsystem ports: No 00:10:45.256 May have multiple controllers: No 00:10:45.256 Associated with SR-IOV VF: No 00:10:45.256 Max Data Transfer Size: 524288 00:10:45.256 Max Number of Namespaces: 256 00:10:45.256 Max Number of I/O Queues: 64 00:10:45.256 NVMe Specification Version (VS): 1.4 00:10:45.256 NVMe Specification Version (Identify): 1.4 00:10:45.256 Maximum Queue Entries: 2048 00:10:45.256 Contiguous Queues Required: Yes 00:10:45.256 Arbitration Mechanisms Supported 00:10:45.256 Weighted Round Robin: Not Supported 00:10:45.256 Vendor Specific: Not Supported 00:10:45.256 Reset Timeout: 7500 ms 00:10:45.256 Doorbell Stride: 4 bytes 00:10:45.256 NVM Subsystem Reset: Not Supported 00:10:45.256 Command Sets Supported 00:10:45.256 NVM Command Set: Supported 00:10:45.256 Boot Partition: Not Supported 00:10:45.256 Memory Page Size Minimum: 4096 bytes 00:10:45.256 Memory Page Size Maximum: 65536 bytes 00:10:45.256 Persistent Memory Region: Not Supported 00:10:45.256 Optional Asynchronous Events Supported 00:10:45.256 Namespace Attribute Notices: Supported 00:10:45.256 Firmware Activation Notices: Not Supported 00:10:45.256 ANA Change Notices: Not Supported 00:10:45.256 PLE Aggregate Log Change Notices: Not Supported 00:10:45.256 LBA Status Info Alert Notices: 
Not Supported 00:10:45.256 EGE Aggregate Log Change Notices: Not Supported 00:10:45.256 Normal NVM Subsystem Shutdown event: Not Supported 00:10:45.256 Zone Descriptor Change Notices: Not Supported 00:10:45.256 Discovery Log Change Notices: Not Supported 00:10:45.256 Controller Attributes 00:10:45.256 128-bit Host Identifier: Not Supported 00:10:45.256 Non-Operational Permissive Mode: Not Supported 00:10:45.256 NVM Sets: Not Supported 00:10:45.256 Read Recovery Levels: Not Supported 00:10:45.256 Endurance Groups: Not Supported 00:10:45.256 Predictable Latency Mode: Not Supported 00:10:45.256 Traffic Based Keep ALive: Not Supported 00:10:45.256 Namespace Granularity: Not Supported 00:10:45.256 SQ Associations: Not Supported 00:10:45.256 UUID List: Not Supported 00:10:45.256 Multi-Domain Subsystem: Not Supported 00:10:45.256 Fixed Capacity Management: Not Supported 00:10:45.256 Variable Capacity Management: Not Supported 00:10:45.256 Delete Endurance Group: Not Supported 00:10:45.256 Delete NVM Set: Not Supported 00:10:45.256 Extended LBA Formats Supported: Supported 00:10:45.256 Flexible Data Placement Supported: Not Supported 00:10:45.256 00:10:45.256 Controller Memory Buffer Support 00:10:45.256 ================================ 00:10:45.256 Supported: No 00:10:45.256 00:10:45.256 Persistent Memory Region Support 00:10:45.256 ================================ 00:10:45.256 Supported: No 00:10:45.256 00:10:45.256 Admin Command Set Attributes 00:10:45.256 ============================ 00:10:45.256 Security Send/Receive: Not Supported 00:10:45.257 Format NVM: Supported 00:10:45.257 Firmware Activate/Download: Not Supported 00:10:45.257 Namespace Management: Supported 00:10:45.257 Device Self-Test: Not Supported 00:10:45.257 Directives: Supported 00:10:45.257 NVMe-MI: Not Supported 00:10:45.257 Virtualization Management: Not Supported 00:10:45.257 Doorbell Buffer Config: Supported 00:10:45.257 Get LBA Status Capability: Not Supported 00:10:45.257 Command & Feature Lockdown Capability: Not Supported 00:10:45.257 Abort Command Limit: 4 00:10:45.257 Async Event Request Limit: 4 00:10:45.257 Number of Firmware Slots: N/A 00:10:45.257 Firmware Slot 1 Read-Only: N/A 00:10:45.257 Firmware Activation Without Reset: N/A 00:10:45.257 Multiple Update Detection Support: N/A 00:10:45.257 Firmware Update Granularity: No Information Provided 00:10:45.257 Per-Namespace SMART Log: Yes 00:10:45.257 Asymmetric Namespace Access Log Page: Not Supported 00:10:45.257 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:45.257 Command Effects Log Page: Supported 00:10:45.257 Get Log Page Extended Data: Supported 00:10:45.257 Telemetry Log Pages: Not Supported 00:10:45.257 Persistent Event Log Pages: Not Supported 00:10:45.257 Supported Log Pages Log Page: May Support 00:10:45.257 Commands Supported & Effects Log Page: Not Supported 00:10:45.257 Feature Identifiers & Effects Log Page:May Support 00:10:45.257 NVMe-MI Commands & Effects Log Page: May Support 00:10:45.257 Data Area 4 for Telemetry Log: Not Supported 00:10:45.257 Error Log Page Entries Supported: 1 00:10:45.257 Keep Alive: Not Supported 00:10:45.257 00:10:45.257 NVM Command Set Attributes 00:10:45.257 ========================== 00:10:45.257 Submission Queue Entry Size 00:10:45.257 Max: 64 00:10:45.257 Min: 64 00:10:45.257 Completion Queue Entry Size 00:10:45.257 Max: 16 00:10:45.257 Min: 16 00:10:45.257 Number of Namespaces: 256 00:10:45.257 Compare Command: Supported 00:10:45.257 Write Uncorrectable Command: Not Supported 00:10:45.257 Dataset Management Command: 
Supported 00:10:45.257 Write Zeroes Command: Supported 00:10:45.257 Set Features Save Field: Supported 00:10:45.257 Reservations: Not Supported 00:10:45.257 Timestamp: Supported 00:10:45.257 Copy: Supported 00:10:45.257 Volatile Write Cache: Present 00:10:45.257 Atomic Write Unit (Normal): 1 00:10:45.257 Atomic Write Unit (PFail): 1 00:10:45.257 Atomic Compare & Write Unit: 1 00:10:45.257 Fused Compare & Write: Not Supported 00:10:45.257 Scatter-Gather List 00:10:45.257 SGL Command Set: Supported 00:10:45.257 SGL Keyed: Not Supported 00:10:45.257 SGL Bit Bucket Descriptor: Not Supported 00:10:45.257 SGL Metadata Pointer: Not Supported 00:10:45.257 Oversized SGL: Not Supported 00:10:45.257 SGL Metadata Address: Not Supported 00:10:45.257 SGL Offset: Not Supported 00:10:45.257 Transport SGL Data Block: Not Supported 00:10:45.257 Replay Protected Memory Block: Not Supported 00:10:45.257 00:10:45.257 Firmware Slot Information 00:10:45.257 ========================= 00:10:45.257 Active slot: 1 00:10:45.257 Slot 1 Firmware Revision: 1.0 00:10:45.257 00:10:45.257 00:10:45.257 Commands Supported and Effects 00:10:45.257 ============================== 00:10:45.257 Admin Commands 00:10:45.257 -------------- 00:10:45.257 Delete I/O Submission Queue (00h): Supported 00:10:45.257 Create I/O Submission Queue (01h): Supported 00:10:45.257 Get Log Page (02h): Supported 00:10:45.257 Delete I/O Completion Queue (04h): Supported 00:10:45.257 Create I/O Completion Queue (05h): Supported 00:10:45.257 Identify (06h): Supported 00:10:45.257 Abort (08h): Supported 00:10:45.257 Set Features (09h): Supported 00:10:45.257 Get Features (0Ah): Supported 00:10:45.257 Asynchronous Event Request (0Ch): Supported 00:10:45.257 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:45.257 Directive Send (19h): Supported 00:10:45.257 Directive Receive (1Ah): Supported 00:10:45.257 Virtualization Management (1Ch): Supported 00:10:45.257 Doorbell Buffer Config (7Ch): Supported 00:10:45.257 Format NVM (80h): Supported LBA-Change 00:10:45.257 I/O Commands 00:10:45.257 ------------ 00:10:45.257 Flush (00h): Supported LBA-Change 00:10:45.257 Write (01h): Supported LBA-Change 00:10:45.257 Read (02h): Supported 00:10:45.257 Compare (05h): Supported 00:10:45.257 Write Zeroes (08h): Supported LBA-Change 00:10:45.257 Dataset Management (09h): Supported LBA-Change 00:10:45.257 Unknown (0Ch): Supported 00:10:45.257 Unknown (12h): Supported 00:10:45.257 Copy (19h): Supported LBA-Change 00:10:45.257 Unknown (1Dh): Supported LBA-Change 00:10:45.257 00:10:45.257 Error Log 00:10:45.257 ========= 00:10:45.257 00:10:45.257 Arbitration 00:10:45.257 =========== 00:10:45.257 Arbitration Burst: no limit 00:10:45.257 00:10:45.257 Power Management 00:10:45.257 ================ 00:10:45.257 Number of Power States: 1 00:10:45.257 Current Power State: Power State #0 00:10:45.257 Power State #0: 00:10:45.257 Max Power: 25.00 W 00:10:45.257 Non-Operational State: Operational 00:10:45.257 Entry Latency: 16 microseconds 00:10:45.257 Exit Latency: 4 microseconds 00:10:45.257 Relative Read Throughput: 0 00:10:45.257 Relative Read Latency: 0 00:10:45.257 Relative Write Throughput: 0 00:10:45.257 Relative Write Latency: 0 00:10:45.257 Idle Power: Not Reported 00:10:45.257 Active Power: Not Reported 00:10:45.257 Non-Operational Permissive Mode: Not Supported 00:10:45.257 00:10:45.257 Health Information 00:10:45.257 ================== 00:10:45.257 Critical Warnings: 00:10:45.257 Available Spare Space: OK 00:10:45.257 Temperature: OK 00:10:45.257 Device 
Reliability: OK 00:10:45.257 Read Only: No 00:10:45.257 Volatile Memory Backup: OK 00:10:45.257 Current Temperature: 323 Kelvin (50 Celsius) 00:10:45.257 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:45.257 Available Spare: 0% 00:10:45.257 Available Spare Threshold: 0% 00:10:45.257 Life Percentage Used: 0% 00:10:45.257 Data Units Read: 2549 00:10:45.257 Data Units Written: 2230 00:10:45.257 Host Read Commands: 105821 00:10:45.257 Host Write Commands: 101591 00:10:45.257 Controller Busy Time: 0 minutes 00:10:45.257 Power Cycles: 0 00:10:45.257 Power On Hours: 0 hours 00:10:45.257 Unsafe Shutdowns: 0 00:10:45.257 Unrecoverable Media Errors: 0 00:10:45.257 Lifetime Error Log Entries: 0 00:10:45.257 Warning Temperature Time: 0 minutes 00:10:45.257 Critical Temperature Time: 0 minutes 00:10:45.257 00:10:45.257 Number of Queues 00:10:45.257 ================ 00:10:45.257 Number of I/O Submission Queues: 64 00:10:45.257 Number of I/O Completion Queues: 64 00:10:45.257 00:10:45.257 ZNS Specific Controller Data 00:10:45.257 ============================ 00:10:45.257 Zone Append Size Limit: 0 00:10:45.257 00:10:45.257 00:10:45.257 Active Namespaces 00:10:45.257 ================= 00:10:45.257 Namespace ID:1 00:10:45.257 Error Recovery Timeout: Unlimited 00:10:45.257 Command Set Identifier: NVM (00h) 00:10:45.257 Deallocate: Supported 00:10:45.257 Deallocated/Unwritten Error: Supported 00:10:45.257 Deallocated Read Value: All 0x00 00:10:45.257 Deallocate in Write Zeroes: Not Supported 00:10:45.257 Deallocated Guard Field: 0xFFFF 00:10:45.257 Flush: Supported 00:10:45.257 Reservation: Not Supported 00:10:45.257 Namespace Sharing Capabilities: Private 00:10:45.257 Size (in LBAs): 1048576 (4GiB) 00:10:45.257 Capacity (in LBAs): 1048576 (4GiB) 00:10:45.257 Utilization (in LBAs): 1048576 (4GiB) 00:10:45.257 Thin Provisioning: Not Supported 00:10:45.257 Per-NS Atomic Units: No 00:10:45.257 Maximum Single Source Range Length: 128 00:10:45.257 Maximum Copy Length: 128 00:10:45.257 Maximum Source Range Count: 128 00:10:45.257 NGUID/EUI64 Never Reused: No 00:10:45.257 Namespace Write Protected: No 00:10:45.257 Number of LBA Formats: 8 00:10:45.257 Current LBA Format: LBA Format #04 00:10:45.257 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:45.257 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:45.257 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:45.257 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:45.257 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:45.257 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:45.257 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:45.257 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:45.257 00:10:45.257 NVM Specific Namespace Data 00:10:45.257 =========================== 00:10:45.257 Logical Block Storage Tag Mask: 0 00:10:45.257 Protection Information Capabilities: 00:10:45.257 16b Guard Protection Information Storage Tag Support: No 00:10:45.257 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:45.257 Storage Tag Check Read Support: No 00:10:45.257 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:45.257 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:45.257 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:45.257 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:45.257 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:45.257 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:45.257 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:45.257 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:45.257 Namespace ID:2 00:10:45.257 Error Recovery Timeout: Unlimited 00:10:45.257 Command Set Identifier: NVM (00h) 00:10:45.257 Deallocate: Supported 00:10:45.257 Deallocated/Unwritten Error: Supported 00:10:45.257 Deallocated Read Value: All 0x00 00:10:45.257 Deallocate in Write Zeroes: Not Supported 00:10:45.257 Deallocated Guard Field: 0xFFFF 00:10:45.257 Flush: Supported 00:10:45.257 Reservation: Not Supported 00:10:45.257 Namespace Sharing Capabilities: Private 00:10:45.257 Size (in LBAs): 1048576 (4GiB) 00:10:45.257 Capacity (in LBAs): 1048576 (4GiB) 00:10:45.257 Utilization (in LBAs): 1048576 (4GiB) 00:10:45.257 Thin Provisioning: Not Supported 00:10:45.257 Per-NS Atomic Units: No 00:10:45.257 Maximum Single Source Range Length: 128 00:10:45.257 Maximum Copy Length: 128 00:10:45.257 Maximum Source Range Count: 128 00:10:45.257 NGUID/EUI64 Never Reused: No 00:10:45.257 Namespace Write Protected: No 00:10:45.257 Number of LBA Formats: 8 00:10:45.257 Current LBA Format: LBA Format #04 00:10:45.257 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:45.257 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:45.257 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:45.257 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:45.257 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:45.257 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:45.257 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:45.257 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:45.257 00:10:45.257 NVM Specific Namespace Data 00:10:45.257 =========================== 00:10:45.257 Logical Block Storage Tag Mask: 0 00:10:45.257 Protection Information Capabilities: 00:10:45.257 16b Guard Protection Information Storage Tag Support: No 00:10:45.257 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:45.257 Storage Tag Check Read Support: No 00:10:45.257 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:45.257 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:45.257 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:45.257 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:45.257 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:45.257 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:45.257 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:45.257 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:45.257 Namespace ID:3 00:10:45.257 Error Recovery Timeout: Unlimited 00:10:45.257 Command Set Identifier: NVM (00h) 00:10:45.257 Deallocate: Supported 00:10:45.257 Deallocated/Unwritten Error: Supported 00:10:45.257 Deallocated Read Value: All 0x00 00:10:45.257 Deallocate in Write Zeroes: Not Supported 00:10:45.257 Deallocated Guard Field: 0xFFFF 00:10:45.257 Flush: Supported 00:10:45.257 Reservation: Not Supported 00:10:45.257 
Namespace Sharing Capabilities: Private 00:10:45.257 Size (in LBAs): 1048576 (4GiB) 00:10:45.257 Capacity (in LBAs): 1048576 (4GiB) 00:10:45.257 Utilization (in LBAs): 1048576 (4GiB) 00:10:45.257 Thin Provisioning: Not Supported 00:10:45.257 Per-NS Atomic Units: No 00:10:45.257 Maximum Single Source Range Length: 128 00:10:45.257 Maximum Copy Length: 128 00:10:45.257 Maximum Source Range Count: 128 00:10:45.257 NGUID/EUI64 Never Reused: No 00:10:45.257 Namespace Write Protected: No 00:10:45.257 Number of LBA Formats: 8 00:10:45.257 Current LBA Format: LBA Format #04 00:10:45.257 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:45.257 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:45.257 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:45.257 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:45.257 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:45.257 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:45.257 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:45.257 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:45.257 00:10:45.257 NVM Specific Namespace Data 00:10:45.257 =========================== 00:10:45.257 Logical Block Storage Tag Mask: 0 00:10:45.257 Protection Information Capabilities: 00:10:45.257 16b Guard Protection Information Storage Tag Support: No 00:10:45.257 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:45.257 Storage Tag Check Read Support: No 00:10:45.257 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:45.257 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:45.257 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:45.257 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:45.257 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:45.258 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:45.258 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:45.258 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:45.258 15:06:23 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:45.258 15:06:23 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:10:45.516 ===================================================== 00:10:45.517 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:45.517 ===================================================== 00:10:45.517 Controller Capabilities/Features 00:10:45.517 ================================ 00:10:45.517 Vendor ID: 1b36 00:10:45.517 Subsystem Vendor ID: 1af4 00:10:45.517 Serial Number: 12343 00:10:45.517 Model Number: QEMU NVMe Ctrl 00:10:45.517 Firmware Version: 8.0.0 00:10:45.517 Recommended Arb Burst: 6 00:10:45.517 IEEE OUI Identifier: 00 54 52 00:10:45.517 Multi-path I/O 00:10:45.517 May have multiple subsystem ports: No 00:10:45.517 May have multiple controllers: Yes 00:10:45.517 Associated with SR-IOV VF: No 00:10:45.517 Max Data Transfer Size: 524288 00:10:45.517 Max Number of Namespaces: 256 00:10:45.517 Max Number of I/O Queues: 64 00:10:45.517 NVMe Specification Version (VS): 1.4 00:10:45.517 NVMe Specification Version (Identify): 1.4 00:10:45.517 Maximum Queue Entries: 2048 
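The GiB figures shown in parentheses follow from the LBA count multiplied by the data size of the current LBA format (4096 bytes for format #04 on these namespaces). A quick check for the 1048576-LBA namespaces above:
echo $((1048576 * 4096))             # 4294967296 bytes
echo $(( (1048576 * 4096) >> 30 ))   # -> 4, i.e. the 4GiB reported in the dump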
00:10:45.517 Contiguous Queues Required: Yes 00:10:45.517 Arbitration Mechanisms Supported 00:10:45.517 Weighted Round Robin: Not Supported 00:10:45.517 Vendor Specific: Not Supported 00:10:45.517 Reset Timeout: 7500 ms 00:10:45.517 Doorbell Stride: 4 bytes 00:10:45.517 NVM Subsystem Reset: Not Supported 00:10:45.517 Command Sets Supported 00:10:45.517 NVM Command Set: Supported 00:10:45.517 Boot Partition: Not Supported 00:10:45.517 Memory Page Size Minimum: 4096 bytes 00:10:45.517 Memory Page Size Maximum: 65536 bytes 00:10:45.517 Persistent Memory Region: Not Supported 00:10:45.517 Optional Asynchronous Events Supported 00:10:45.517 Namespace Attribute Notices: Supported 00:10:45.517 Firmware Activation Notices: Not Supported 00:10:45.517 ANA Change Notices: Not Supported 00:10:45.517 PLE Aggregate Log Change Notices: Not Supported 00:10:45.517 LBA Status Info Alert Notices: Not Supported 00:10:45.517 EGE Aggregate Log Change Notices: Not Supported 00:10:45.517 Normal NVM Subsystem Shutdown event: Not Supported 00:10:45.517 Zone Descriptor Change Notices: Not Supported 00:10:45.517 Discovery Log Change Notices: Not Supported 00:10:45.517 Controller Attributes 00:10:45.517 128-bit Host Identifier: Not Supported 00:10:45.517 Non-Operational Permissive Mode: Not Supported 00:10:45.517 NVM Sets: Not Supported 00:10:45.517 Read Recovery Levels: Not Supported 00:10:45.517 Endurance Groups: Supported 00:10:45.517 Predictable Latency Mode: Not Supported 00:10:45.517 Traffic Based Keep ALive: Not Supported 00:10:45.517 Namespace Granularity: Not Supported 00:10:45.517 SQ Associations: Not Supported 00:10:45.517 UUID List: Not Supported 00:10:45.517 Multi-Domain Subsystem: Not Supported 00:10:45.517 Fixed Capacity Management: Not Supported 00:10:45.517 Variable Capacity Management: Not Supported 00:10:45.517 Delete Endurance Group: Not Supported 00:10:45.517 Delete NVM Set: Not Supported 00:10:45.517 Extended LBA Formats Supported: Supported 00:10:45.517 Flexible Data Placement Supported: Supported 00:10:45.517 00:10:45.517 Controller Memory Buffer Support 00:10:45.517 ================================ 00:10:45.517 Supported: No 00:10:45.517 00:10:45.517 Persistent Memory Region Support 00:10:45.517 ================================ 00:10:45.517 Supported: No 00:10:45.517 00:10:45.517 Admin Command Set Attributes 00:10:45.517 ============================ 00:10:45.517 Security Send/Receive: Not Supported 00:10:45.517 Format NVM: Supported 00:10:45.517 Firmware Activate/Download: Not Supported 00:10:45.517 Namespace Management: Supported 00:10:45.517 Device Self-Test: Not Supported 00:10:45.517 Directives: Supported 00:10:45.517 NVMe-MI: Not Supported 00:10:45.517 Virtualization Management: Not Supported 00:10:45.517 Doorbell Buffer Config: Supported 00:10:45.517 Get LBA Status Capability: Not Supported 00:10:45.517 Command & Feature Lockdown Capability: Not Supported 00:10:45.517 Abort Command Limit: 4 00:10:45.517 Async Event Request Limit: 4 00:10:45.517 Number of Firmware Slots: N/A 00:10:45.517 Firmware Slot 1 Read-Only: N/A 00:10:45.517 Firmware Activation Without Reset: N/A 00:10:45.517 Multiple Update Detection Support: N/A 00:10:45.517 Firmware Update Granularity: No Information Provided 00:10:45.517 Per-Namespace SMART Log: Yes 00:10:45.517 Asymmetric Namespace Access Log Page: Not Supported 00:10:45.517 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:45.517 Command Effects Log Page: Supported 00:10:45.517 Get Log Page Extended Data: Supported 00:10:45.517 Telemetry Log Pages: Not 
Supported 00:10:45.517 Persistent Event Log Pages: Not Supported 00:10:45.517 Supported Log Pages Log Page: May Support 00:10:45.517 Commands Supported & Effects Log Page: Not Supported 00:10:45.517 Feature Identifiers & Effects Log Page:May Support 00:10:45.517 NVMe-MI Commands & Effects Log Page: May Support 00:10:45.517 Data Area 4 for Telemetry Log: Not Supported 00:10:45.517 Error Log Page Entries Supported: 1 00:10:45.517 Keep Alive: Not Supported 00:10:45.517 00:10:45.517 NVM Command Set Attributes 00:10:45.517 ========================== 00:10:45.517 Submission Queue Entry Size 00:10:45.517 Max: 64 00:10:45.517 Min: 64 00:10:45.517 Completion Queue Entry Size 00:10:45.517 Max: 16 00:10:45.517 Min: 16 00:10:45.517 Number of Namespaces: 256 00:10:45.517 Compare Command: Supported 00:10:45.517 Write Uncorrectable Command: Not Supported 00:10:45.517 Dataset Management Command: Supported 00:10:45.517 Write Zeroes Command: Supported 00:10:45.517 Set Features Save Field: Supported 00:10:45.517 Reservations: Not Supported 00:10:45.517 Timestamp: Supported 00:10:45.517 Copy: Supported 00:10:45.517 Volatile Write Cache: Present 00:10:45.517 Atomic Write Unit (Normal): 1 00:10:45.517 Atomic Write Unit (PFail): 1 00:10:45.517 Atomic Compare & Write Unit: 1 00:10:45.517 Fused Compare & Write: Not Supported 00:10:45.517 Scatter-Gather List 00:10:45.517 SGL Command Set: Supported 00:10:45.517 SGL Keyed: Not Supported 00:10:45.517 SGL Bit Bucket Descriptor: Not Supported 00:10:45.517 SGL Metadata Pointer: Not Supported 00:10:45.517 Oversized SGL: Not Supported 00:10:45.517 SGL Metadata Address: Not Supported 00:10:45.517 SGL Offset: Not Supported 00:10:45.517 Transport SGL Data Block: Not Supported 00:10:45.517 Replay Protected Memory Block: Not Supported 00:10:45.517 00:10:45.517 Firmware Slot Information 00:10:45.517 ========================= 00:10:45.517 Active slot: 1 00:10:45.517 Slot 1 Firmware Revision: 1.0 00:10:45.517 00:10:45.517 00:10:45.517 Commands Supported and Effects 00:10:45.517 ============================== 00:10:45.517 Admin Commands 00:10:45.517 -------------- 00:10:45.517 Delete I/O Submission Queue (00h): Supported 00:10:45.517 Create I/O Submission Queue (01h): Supported 00:10:45.517 Get Log Page (02h): Supported 00:10:45.517 Delete I/O Completion Queue (04h): Supported 00:10:45.517 Create I/O Completion Queue (05h): Supported 00:10:45.517 Identify (06h): Supported 00:10:45.517 Abort (08h): Supported 00:10:45.517 Set Features (09h): Supported 00:10:45.517 Get Features (0Ah): Supported 00:10:45.517 Asynchronous Event Request (0Ch): Supported 00:10:45.517 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:45.517 Directive Send (19h): Supported 00:10:45.517 Directive Receive (1Ah): Supported 00:10:45.517 Virtualization Management (1Ch): Supported 00:10:45.517 Doorbell Buffer Config (7Ch): Supported 00:10:45.517 Format NVM (80h): Supported LBA-Change 00:10:45.517 I/O Commands 00:10:45.517 ------------ 00:10:45.517 Flush (00h): Supported LBA-Change 00:10:45.517 Write (01h): Supported LBA-Change 00:10:45.517 Read (02h): Supported 00:10:45.517 Compare (05h): Supported 00:10:45.517 Write Zeroes (08h): Supported LBA-Change 00:10:45.517 Dataset Management (09h): Supported LBA-Change 00:10:45.517 Unknown (0Ch): Supported 00:10:45.517 Unknown (12h): Supported 00:10:45.517 Copy (19h): Supported LBA-Change 00:10:45.517 Unknown (1Dh): Supported LBA-Change 00:10:45.517 00:10:45.517 Error Log 00:10:45.517 ========= 00:10:45.517 00:10:45.517 Arbitration 00:10:45.517 =========== 
00:10:45.517 Arbitration Burst: no limit 00:10:45.517 00:10:45.517 Power Management 00:10:45.517 ================ 00:10:45.517 Number of Power States: 1 00:10:45.517 Current Power State: Power State #0 00:10:45.517 Power State #0: 00:10:45.517 Max Power: 25.00 W 00:10:45.517 Non-Operational State: Operational 00:10:45.517 Entry Latency: 16 microseconds 00:10:45.517 Exit Latency: 4 microseconds 00:10:45.517 Relative Read Throughput: 0 00:10:45.517 Relative Read Latency: 0 00:10:45.517 Relative Write Throughput: 0 00:10:45.517 Relative Write Latency: 0 00:10:45.517 Idle Power: Not Reported 00:10:45.517 Active Power: Not Reported 00:10:45.517 Non-Operational Permissive Mode: Not Supported 00:10:45.517 00:10:45.517 Health Information 00:10:45.517 ================== 00:10:45.517 Critical Warnings: 00:10:45.517 Available Spare Space: OK 00:10:45.517 Temperature: OK 00:10:45.517 Device Reliability: OK 00:10:45.517 Read Only: No 00:10:45.517 Volatile Memory Backup: OK 00:10:45.517 Current Temperature: 323 Kelvin (50 Celsius) 00:10:45.517 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:45.518 Available Spare: 0% 00:10:45.518 Available Spare Threshold: 0% 00:10:45.518 Life Percentage Used: 0% 00:10:45.518 Data Units Read: 1031 00:10:45.518 Data Units Written: 925 00:10:45.518 Host Read Commands: 36830 00:10:45.518 Host Write Commands: 35420 00:10:45.518 Controller Busy Time: 0 minutes 00:10:45.518 Power Cycles: 0 00:10:45.518 Power On Hours: 0 hours 00:10:45.518 Unsafe Shutdowns: 0 00:10:45.518 Unrecoverable Media Errors: 0 00:10:45.518 Lifetime Error Log Entries: 0 00:10:45.518 Warning Temperature Time: 0 minutes 00:10:45.518 Critical Temperature Time: 0 minutes 00:10:45.518 00:10:45.518 Number of Queues 00:10:45.518 ================ 00:10:45.518 Number of I/O Submission Queues: 64 00:10:45.518 Number of I/O Completion Queues: 64 00:10:45.518 00:10:45.518 ZNS Specific Controller Data 00:10:45.518 ============================ 00:10:45.518 Zone Append Size Limit: 0 00:10:45.518 00:10:45.518 00:10:45.518 Active Namespaces 00:10:45.518 ================= 00:10:45.518 Namespace ID:1 00:10:45.518 Error Recovery Timeout: Unlimited 00:10:45.518 Command Set Identifier: NVM (00h) 00:10:45.518 Deallocate: Supported 00:10:45.518 Deallocated/Unwritten Error: Supported 00:10:45.518 Deallocated Read Value: All 0x00 00:10:45.518 Deallocate in Write Zeroes: Not Supported 00:10:45.518 Deallocated Guard Field: 0xFFFF 00:10:45.518 Flush: Supported 00:10:45.518 Reservation: Not Supported 00:10:45.518 Namespace Sharing Capabilities: Multiple Controllers 00:10:45.518 Size (in LBAs): 262144 (1GiB) 00:10:45.518 Capacity (in LBAs): 262144 (1GiB) 00:10:45.518 Utilization (in LBAs): 262144 (1GiB) 00:10:45.518 Thin Provisioning: Not Supported 00:10:45.518 Per-NS Atomic Units: No 00:10:45.518 Maximum Single Source Range Length: 128 00:10:45.518 Maximum Copy Length: 128 00:10:45.518 Maximum Source Range Count: 128 00:10:45.518 NGUID/EUI64 Never Reused: No 00:10:45.518 Namespace Write Protected: No 00:10:45.518 Endurance group ID: 1 00:10:45.518 Number of LBA Formats: 8 00:10:45.518 Current LBA Format: LBA Format #04 00:10:45.518 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:45.518 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:45.518 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:45.518 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:45.518 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:45.518 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:45.518 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:10:45.518 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:45.518 00:10:45.518 Get Feature FDP: 00:10:45.518 ================ 00:10:45.518 Enabled: Yes 00:10:45.518 FDP configuration index: 0 00:10:45.518 00:10:45.518 FDP configurations log page 00:10:45.518 =========================== 00:10:45.518 Number of FDP configurations: 1 00:10:45.518 Version: 0 00:10:45.518 Size: 112 00:10:45.518 FDP Configuration Descriptor: 0 00:10:45.518 Descriptor Size: 96 00:10:45.518 Reclaim Group Identifier format: 2 00:10:45.518 FDP Volatile Write Cache: Not Present 00:10:45.518 FDP Configuration: Valid 00:10:45.518 Vendor Specific Size: 0 00:10:45.518 Number of Reclaim Groups: 2 00:10:45.518 Number of Recalim Unit Handles: 8 00:10:45.518 Max Placement Identifiers: 128 00:10:45.518 Number of Namespaces Suppprted: 256 00:10:45.518 Reclaim unit Nominal Size: 6000000 bytes 00:10:45.518 Estimated Reclaim Unit Time Limit: Not Reported 00:10:45.518 RUH Desc #000: RUH Type: Initially Isolated 00:10:45.518 RUH Desc #001: RUH Type: Initially Isolated 00:10:45.518 RUH Desc #002: RUH Type: Initially Isolated 00:10:45.518 RUH Desc #003: RUH Type: Initially Isolated 00:10:45.518 RUH Desc #004: RUH Type: Initially Isolated 00:10:45.518 RUH Desc #005: RUH Type: Initially Isolated 00:10:45.518 RUH Desc #006: RUH Type: Initially Isolated 00:10:45.518 RUH Desc #007: RUH Type: Initially Isolated 00:10:45.518 00:10:45.518 FDP reclaim unit handle usage log page 00:10:45.778 ====================================== 00:10:45.778 Number of Reclaim Unit Handles: 8 00:10:45.778 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:45.778 RUH Usage Desc #001: RUH Attributes: Unused 00:10:45.778 RUH Usage Desc #002: RUH Attributes: Unused 00:10:45.778 RUH Usage Desc #003: RUH Attributes: Unused 00:10:45.778 RUH Usage Desc #004: RUH Attributes: Unused 00:10:45.778 RUH Usage Desc #005: RUH Attributes: Unused 00:10:45.778 RUH Usage Desc #006: RUH Attributes: Unused 00:10:45.778 RUH Usage Desc #007: RUH Attributes: Unused 00:10:45.778 00:10:45.778 FDP statistics log page 00:10:45.778 ======================= 00:10:45.778 Host bytes with metadata written: 572366848 00:10:45.778 Media bytes with metadata written: 572444672 00:10:45.778 Media bytes erased: 0 00:10:45.778 00:10:45.778 FDP events log page 00:10:45.778 =================== 00:10:45.778 Number of FDP events: 0 00:10:45.778 00:10:45.778 NVM Specific Namespace Data 00:10:45.778 =========================== 00:10:45.778 Logical Block Storage Tag Mask: 0 00:10:45.778 Protection Information Capabilities: 00:10:45.778 16b Guard Protection Information Storage Tag Support: No 00:10:45.778 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:45.778 Storage Tag Check Read Support: No 00:10:45.778 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:45.778 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:45.778 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:45.778 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:45.778 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:45.778 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:45.778 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:45.778 Extended LBA 
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:45.778 00:10:45.778 real 0m1.506s 00:10:45.778 user 0m0.542s 00:10:45.778 sys 0m0.752s 00:10:45.778 15:06:23 nvme.nvme_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:45.778 15:06:23 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:10:45.778 ************************************ 00:10:45.778 END TEST nvme_identify 00:10:45.778 ************************************ 00:10:45.778 15:06:23 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:45.778 15:06:23 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:10:45.778 15:06:23 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:45.778 15:06:23 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:45.778 15:06:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:45.778 ************************************ 00:10:45.778 START TEST nvme_perf 00:10:45.778 ************************************ 00:10:45.778 15:06:23 nvme.nvme_perf -- common/autotest_common.sh@1123 -- # nvme_perf 00:10:45.778 15:06:23 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:10:47.157 Initializing NVMe Controllers 00:10:47.158 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:47.158 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:47.158 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:47.158 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:47.158 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:47.158 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:47.158 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:47.158 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:47.158 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:47.158 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:47.158 Initialization complete. Launching workers. 
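The read-phase results that follow come from the single spdk_nvme_perf invocation shown above: queue depth 128, a 100% read workload, 12288-byte (12 KiB) I/Os, and a one-second run, with -LL turning on the latency tracking that produces the summary and histogram blocks below. As a minimal sketch (not captured output), the same measurement could be rerun by hand, assuming the repo is already built at the path the job uses and the same emulated controllers are attached; the flag notes reflect common spdk_nvme_perf usage, and the remaining flags are copied verbatim from the job:

# Queue depth 128 (-q), 100% reads (-w), 12288-byte I/Os (-o), 1-second run (-t);
# -LL enables latency tracking (summary percentiles plus per-bucket histograms);
# -i 0 (shared-memory id) and -N are kept exactly as the job passed them.
sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N

In the per-device "Summary latency data" blocks, each percentile lines up with the bucket in the matching "Latency histogram" where the cumulative I/O fraction first crosses that percentile; for example, the 50.00000% : 7841.425us entry for 0000:00:10.0 corresponds to the 7784.189 - 7841.425 bucket, where the cumulative count reaches 50.8709%.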
00:10:47.158 ======================================================== 00:10:47.158 Latency(us) 00:10:47.158 Device Information : IOPS MiB/s Average min max 00:10:47.158 PCIE (0000:00:10.0) NSID 1 from core 0: 15608.84 182.92 8225.26 6808.76 47068.12 00:10:47.158 PCIE (0000:00:11.0) NSID 1 from core 0: 15608.84 182.92 8212.16 6893.58 44682.58 00:10:47.158 PCIE (0000:00:13.0) NSID 1 from core 0: 15608.84 182.92 8197.63 6892.10 42919.43 00:10:47.158 PCIE (0000:00:12.0) NSID 1 from core 0: 15608.84 182.92 8182.03 6885.40 40339.36 00:10:47.158 PCIE (0000:00:12.0) NSID 2 from core 0: 15608.84 182.92 8166.97 6907.12 37912.14 00:10:47.158 PCIE (0000:00:12.0) NSID 3 from core 0: 15672.81 183.67 8117.94 6898.26 30928.92 00:10:47.158 ======================================================== 00:10:47.158 Total : 93716.98 1098.25 8183.62 6808.76 47068.12 00:10:47.158 00:10:47.158 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:47.158 ================================================================================= 00:10:47.158 1.00000% : 7068.730us 00:10:47.158 10.00000% : 7383.532us 00:10:47.158 25.00000% : 7555.242us 00:10:47.158 50.00000% : 7841.425us 00:10:47.158 75.00000% : 8184.845us 00:10:47.158 90.00000% : 8413.792us 00:10:47.158 95.00000% : 8642.739us 00:10:47.158 98.00000% : 12992.727us 00:10:47.158 99.00000% : 15682.851us 00:10:47.158 99.50000% : 39836.730us 00:10:47.158 99.90000% : 46705.132us 00:10:47.158 99.99000% : 47163.025us 00:10:47.158 99.99900% : 47163.025us 00:10:47.158 99.99990% : 47163.025us 00:10:47.158 99.99999% : 47163.025us 00:10:47.158 00:10:47.158 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:47.158 ================================================================================= 00:10:47.158 1.00000% : 7125.967us 00:10:47.158 10.00000% : 7440.769us 00:10:47.158 25.00000% : 7612.479us 00:10:47.158 50.00000% : 7841.425us 00:10:47.158 75.00000% : 8127.609us 00:10:47.158 90.00000% : 8356.555us 00:10:47.158 95.00000% : 8585.502us 00:10:47.158 98.00000% : 12935.490us 00:10:47.158 99.00000% : 15224.957us 00:10:47.158 99.50000% : 37776.210us 00:10:47.158 99.90000% : 44415.665us 00:10:47.158 99.99000% : 44873.558us 00:10:47.158 99.99900% : 44873.558us 00:10:47.158 99.99990% : 44873.558us 00:10:47.158 99.99999% : 44873.558us 00:10:47.158 00:10:47.158 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:47.158 ================================================================================= 00:10:47.158 1.00000% : 7125.967us 00:10:47.158 10.00000% : 7440.769us 00:10:47.158 25.00000% : 7612.479us 00:10:47.158 50.00000% : 7841.425us 00:10:47.158 75.00000% : 8127.609us 00:10:47.158 90.00000% : 8356.555us 00:10:47.158 95.00000% : 8528.266us 00:10:47.158 98.00000% : 12821.017us 00:10:47.158 99.00000% : 15568.377us 00:10:47.158 99.50000% : 36173.583us 00:10:47.158 99.90000% : 42584.091us 00:10:47.158 99.99000% : 43041.984us 00:10:47.158 99.99900% : 43041.984us 00:10:47.158 99.99990% : 43041.984us 00:10:47.158 99.99999% : 43041.984us 00:10:47.158 00:10:47.158 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:47.158 ================================================================================= 00:10:47.158 1.00000% : 7125.967us 00:10:47.158 10.00000% : 7440.769us 00:10:47.158 25.00000% : 7612.479us 00:10:47.158 50.00000% : 7841.425us 00:10:47.158 75.00000% : 8127.609us 00:10:47.158 90.00000% : 8356.555us 00:10:47.158 95.00000% : 8528.266us 00:10:47.158 98.00000% : 13221.673us 00:10:47.158 99.00000% : 
15911.797us 00:10:47.158 99.50000% : 33655.169us 00:10:47.158 99.90000% : 40065.677us 00:10:47.158 99.99000% : 40523.570us 00:10:47.158 99.99900% : 40523.570us 00:10:47.158 99.99990% : 40523.570us 00:10:47.158 99.99999% : 40523.570us 00:10:47.158 00:10:47.158 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:47.158 ================================================================================= 00:10:47.158 1.00000% : 7125.967us 00:10:47.158 10.00000% : 7440.769us 00:10:47.158 25.00000% : 7612.479us 00:10:47.158 50.00000% : 7841.425us 00:10:47.158 75.00000% : 8127.609us 00:10:47.158 90.00000% : 8356.555us 00:10:47.158 95.00000% : 8528.266us 00:10:47.158 98.00000% : 13393.383us 00:10:47.158 99.00000% : 16255.217us 00:10:47.158 99.50000% : 31365.701us 00:10:47.158 99.90000% : 37547.263us 00:10:47.158 99.99000% : 38005.156us 00:10:47.158 99.99900% : 38005.156us 00:10:47.158 99.99990% : 38005.156us 00:10:47.158 99.99999% : 38005.156us 00:10:47.158 00:10:47.158 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:47.158 ================================================================================= 00:10:47.158 1.00000% : 7125.967us 00:10:47.158 10.00000% : 7440.769us 00:10:47.158 25.00000% : 7612.479us 00:10:47.158 50.00000% : 7841.425us 00:10:47.158 75.00000% : 8127.609us 00:10:47.158 90.00000% : 8356.555us 00:10:47.158 95.00000% : 8585.502us 00:10:47.158 98.00000% : 13221.673us 00:10:47.158 99.00000% : 16255.217us 00:10:47.158 99.50000% : 23581.513us 00:10:47.158 99.90000% : 30449.914us 00:10:47.158 99.99000% : 30907.808us 00:10:47.158 99.99900% : 31136.755us 00:10:47.158 99.99990% : 31136.755us 00:10:47.158 99.99999% : 31136.755us 00:10:47.158 00:10:47.158 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:47.158 ============================================================================== 00:10:47.158 Range in us Cumulative IO count 00:10:47.158 6782.547 - 6811.165: 0.0128% ( 2) 00:10:47.158 6839.783 - 6868.402: 0.0384% ( 4) 00:10:47.158 6868.402 - 6897.020: 0.0897% ( 8) 00:10:47.158 6897.020 - 6925.638: 0.1473% ( 9) 00:10:47.158 6925.638 - 6954.257: 0.2818% ( 21) 00:10:47.158 6954.257 - 6982.875: 0.4290% ( 23) 00:10:47.158 6982.875 - 7011.493: 0.6276% ( 31) 00:10:47.158 7011.493 - 7040.112: 0.9285% ( 47) 00:10:47.158 7040.112 - 7068.730: 1.2487% ( 50) 00:10:47.158 7068.730 - 7097.348: 1.7034% ( 71) 00:10:47.158 7097.348 - 7125.967: 2.1260% ( 66) 00:10:47.158 7125.967 - 7154.585: 2.7472% ( 97) 00:10:47.158 7154.585 - 7183.203: 3.5156% ( 120) 00:10:47.158 7183.203 - 7211.822: 4.4442% ( 145) 00:10:47.158 7211.822 - 7240.440: 5.4239% ( 153) 00:10:47.158 7240.440 - 7269.059: 6.5830% ( 181) 00:10:47.158 7269.059 - 7297.677: 8.0238% ( 225) 00:10:47.158 7297.677 - 7326.295: 9.7208% ( 265) 00:10:47.158 7326.295 - 7383.532: 13.3965% ( 574) 00:10:47.158 7383.532 - 7440.769: 17.8151% ( 690) 00:10:47.158 7440.769 - 7498.005: 22.2144% ( 687) 00:10:47.158 7498.005 - 7555.242: 26.8763% ( 728) 00:10:47.158 7555.242 - 7612.479: 31.7943% ( 768) 00:10:47.158 7612.479 - 7669.715: 36.5779% ( 747) 00:10:47.158 7669.715 - 7726.952: 41.1821% ( 719) 00:10:47.158 7726.952 - 7784.189: 45.9849% ( 750) 00:10:47.158 7784.189 - 7841.425: 50.8709% ( 763) 00:10:47.158 7841.425 - 7898.662: 55.6737% ( 750) 00:10:47.158 7898.662 - 7955.899: 60.5341% ( 759) 00:10:47.158 7955.899 - 8013.135: 65.3176% ( 747) 00:10:47.158 8013.135 - 8070.372: 69.8963% ( 715) 00:10:47.158 8070.372 - 8127.609: 74.2700% ( 683) 00:10:47.158 8127.609 - 8184.845: 78.4324% ( 650) 00:10:47.158 
8184.845 - 8242.082: 82.3450% ( 611) 00:10:47.158 8242.082 - 8299.319: 85.9055% ( 556) 00:10:47.158 8299.319 - 8356.555: 88.8128% ( 454) 00:10:47.158 8356.555 - 8413.792: 91.2654% ( 383) 00:10:47.158 8413.792 - 8471.029: 92.9303% ( 260) 00:10:47.158 8471.029 - 8528.266: 94.1086% ( 184) 00:10:47.158 8528.266 - 8585.502: 94.8706% ( 119) 00:10:47.158 8585.502 - 8642.739: 95.3445% ( 74) 00:10:47.158 8642.739 - 8699.976: 95.6455% ( 47) 00:10:47.158 8699.976 - 8757.212: 95.8184% ( 27) 00:10:47.158 8757.212 - 8814.449: 95.9657% ( 23) 00:10:47.158 8814.449 - 8871.686: 96.0873% ( 19) 00:10:47.158 8871.686 - 8928.922: 96.1898% ( 16) 00:10:47.158 8928.922 - 8986.159: 96.2923% ( 16) 00:10:47.158 8986.159 - 9043.396: 96.3691% ( 12) 00:10:47.158 9043.396 - 9100.632: 96.4203% ( 8) 00:10:47.158 9100.632 - 9157.869: 96.4652% ( 7) 00:10:47.158 9157.869 - 9215.106: 96.4972% ( 5) 00:10:47.158 9215.106 - 9272.342: 96.5420% ( 7) 00:10:47.158 9272.342 - 9329.579: 96.5740% ( 5) 00:10:47.158 9329.579 - 9386.816: 96.6189% ( 7) 00:10:47.158 9386.816 - 9444.052: 96.6445% ( 4) 00:10:47.158 9444.052 - 9501.289: 96.6573% ( 2) 00:10:47.158 9501.289 - 9558.526: 96.6765% ( 3) 00:10:47.158 9558.526 - 9615.762: 96.7085% ( 5) 00:10:47.158 9615.762 - 9672.999: 96.7597% ( 8) 00:10:47.158 9672.999 - 9730.236: 96.7789% ( 3) 00:10:47.158 9730.236 - 9787.472: 96.7918% ( 2) 00:10:47.158 9787.472 - 9844.709: 96.8110% ( 3) 00:10:47.158 9844.709 - 9901.946: 96.8430% ( 5) 00:10:47.158 9901.946 - 9959.183: 96.8686% ( 4) 00:10:47.158 9959.183 - 10016.419: 96.8942% ( 4) 00:10:47.158 10016.419 - 10073.656: 96.9454% ( 8) 00:10:47.158 10073.656 - 10130.893: 96.9647% ( 3) 00:10:47.158 10130.893 - 10188.129: 96.9903% ( 4) 00:10:47.158 10188.129 - 10245.366: 97.0351% ( 7) 00:10:47.158 10245.366 - 10302.603: 97.0543% ( 3) 00:10:47.158 10302.603 - 10359.839: 97.0927% ( 6) 00:10:47.158 10359.839 - 10417.076: 97.1119% ( 3) 00:10:47.158 10417.076 - 10474.313: 97.1568% ( 7) 00:10:47.158 10474.313 - 10531.549: 97.1696% ( 2) 00:10:47.158 10531.549 - 10588.786: 97.2080% ( 6) 00:10:47.158 10588.786 - 10646.023: 97.2336% ( 4) 00:10:47.158 10646.023 - 10703.259: 97.2656% ( 5) 00:10:47.159 10703.259 - 10760.496: 97.2976% ( 5) 00:10:47.159 10817.733 - 10874.969: 97.3169% ( 3) 00:10:47.159 10874.969 - 10932.206: 97.3233% ( 1) 00:10:47.159 10932.206 - 10989.443: 97.3361% ( 2) 00:10:47.159 10989.443 - 11046.679: 97.3489% ( 2) 00:10:47.159 11046.679 - 11103.916: 97.3553% ( 1) 00:10:47.159 11103.916 - 11161.153: 97.3681% ( 2) 00:10:47.159 11161.153 - 11218.390: 97.3745% ( 1) 00:10:47.159 11218.390 - 11275.626: 97.3937% ( 3) 00:10:47.159 11275.626 - 11332.863: 97.4001% ( 1) 00:10:47.159 11332.863 - 11390.100: 97.4129% ( 2) 00:10:47.159 11390.100 - 11447.336: 97.4193% ( 1) 00:10:47.159 11447.336 - 11504.573: 97.4321% ( 2) 00:10:47.159 11504.573 - 11561.810: 97.4449% ( 2) 00:10:47.159 11561.810 - 11619.046: 97.4513% ( 1) 00:10:47.159 11619.046 - 11676.283: 97.4641% ( 2) 00:10:47.159 11676.283 - 11733.520: 97.4705% ( 1) 00:10:47.159 11733.520 - 11790.756: 97.4834% ( 2) 00:10:47.159 11790.756 - 11847.993: 97.4898% ( 1) 00:10:47.159 11847.993 - 11905.230: 97.5026% ( 2) 00:10:47.159 11905.230 - 11962.466: 97.5090% ( 1) 00:10:47.159 11962.466 - 12019.703: 97.5282% ( 3) 00:10:47.159 12019.703 - 12076.940: 97.5346% ( 1) 00:10:47.159 12076.940 - 12134.176: 97.5410% ( 1) 00:10:47.159 12191.413 - 12248.650: 97.5730% ( 5) 00:10:47.159 12248.650 - 12305.886: 97.5986% ( 4) 00:10:47.159 12305.886 - 12363.123: 97.6242% ( 4) 00:10:47.159 12363.123 - 12420.360: 97.6627% ( 6) 
00:10:47.159 12420.360 - 12477.597: 97.6883% ( 4) 00:10:47.159 12477.597 - 12534.833: 97.7075% ( 3) 00:10:47.159 12534.833 - 12592.070: 97.7459% ( 6) 00:10:47.159 12592.070 - 12649.307: 97.7715% ( 4) 00:10:47.159 12649.307 - 12706.543: 97.8099% ( 6) 00:10:47.159 12706.543 - 12763.780: 97.8356% ( 4) 00:10:47.159 12763.780 - 12821.017: 97.8740% ( 6) 00:10:47.159 12821.017 - 12878.253: 97.9124% ( 6) 00:10:47.159 12878.253 - 12935.490: 97.9380% ( 4) 00:10:47.159 12935.490 - 12992.727: 98.0085% ( 11) 00:10:47.159 12992.727 - 13049.963: 98.0597% ( 8) 00:10:47.159 13049.963 - 13107.200: 98.1109% ( 8) 00:10:47.159 13107.200 - 13164.437: 98.1493% ( 6) 00:10:47.159 13164.437 - 13221.673: 98.2198% ( 11) 00:10:47.159 13221.673 - 13278.910: 98.2646% ( 7) 00:10:47.159 13278.910 - 13336.147: 98.3094% ( 7) 00:10:47.159 13336.147 - 13393.383: 98.3671% ( 9) 00:10:47.159 13393.383 - 13450.620: 98.4183% ( 8) 00:10:47.159 13450.620 - 13507.857: 98.4695% ( 8) 00:10:47.159 13507.857 - 13565.093: 98.5015% ( 5) 00:10:47.159 13565.093 - 13622.330: 98.5336% ( 5) 00:10:47.159 13622.330 - 13679.567: 98.5656% ( 5) 00:10:47.159 13679.567 - 13736.803: 98.6040% ( 6) 00:10:47.159 13736.803 - 13794.040: 98.6424% ( 6) 00:10:47.159 13794.040 - 13851.277: 98.6744% ( 5) 00:10:47.159 13851.277 - 13908.514: 98.7065% ( 5) 00:10:47.159 13908.514 - 13965.750: 98.7193% ( 2) 00:10:47.159 13965.750 - 14022.987: 98.7577% ( 6) 00:10:47.159 14022.987 - 14080.224: 98.7705% ( 2) 00:10:47.159 14881.537 - 14996.010: 98.7961% ( 4) 00:10:47.159 14996.010 - 15110.484: 98.8281% ( 5) 00:10:47.159 15110.484 - 15224.957: 98.8730% ( 7) 00:10:47.159 15224.957 - 15339.431: 98.8986% ( 4) 00:10:47.159 15339.431 - 15453.904: 98.9434% ( 7) 00:10:47.159 15453.904 - 15568.377: 98.9754% ( 5) 00:10:47.159 15568.377 - 15682.851: 99.0202% ( 7) 00:10:47.159 15682.851 - 15797.324: 99.0587% ( 6) 00:10:47.159 15797.324 - 15911.797: 99.0907% ( 5) 00:10:47.159 15911.797 - 16026.271: 99.1291% ( 6) 00:10:47.159 16026.271 - 16140.744: 99.1675% ( 6) 00:10:47.159 16140.744 - 16255.217: 99.1803% ( 2) 00:10:47.159 38005.156 - 38234.103: 99.2188% ( 6) 00:10:47.159 38234.103 - 38463.050: 99.2636% ( 7) 00:10:47.159 38463.050 - 38691.997: 99.3084% ( 7) 00:10:47.159 38691.997 - 38920.943: 99.3532% ( 7) 00:10:47.159 38920.943 - 39149.890: 99.3981% ( 7) 00:10:47.159 39149.890 - 39378.837: 99.4429% ( 7) 00:10:47.159 39378.837 - 39607.783: 99.4877% ( 7) 00:10:47.159 39607.783 - 39836.730: 99.5325% ( 7) 00:10:47.159 39836.730 - 40065.677: 99.5774% ( 7) 00:10:47.159 40065.677 - 40294.624: 99.5902% ( 2) 00:10:47.159 44873.558 - 45102.505: 99.6158% ( 4) 00:10:47.159 45102.505 - 45331.452: 99.6542% ( 6) 00:10:47.159 45331.452 - 45560.398: 99.7118% ( 9) 00:10:47.159 45560.398 - 45789.345: 99.7503% ( 6) 00:10:47.159 45789.345 - 46018.292: 99.8015% ( 8) 00:10:47.159 46018.292 - 46247.238: 99.8399% ( 6) 00:10:47.159 46247.238 - 46476.185: 99.8847% ( 7) 00:10:47.159 46476.185 - 46705.132: 99.9296% ( 7) 00:10:47.159 46705.132 - 46934.079: 99.9808% ( 8) 00:10:47.159 46934.079 - 47163.025: 100.0000% ( 3) 00:10:47.159 00:10:47.159 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:47.159 ============================================================================== 00:10:47.159 Range in us Cumulative IO count 00:10:47.159 6868.402 - 6897.020: 0.0064% ( 1) 00:10:47.159 6897.020 - 6925.638: 0.0256% ( 3) 00:10:47.159 6925.638 - 6954.257: 0.0384% ( 2) 00:10:47.159 6954.257 - 6982.875: 0.0897% ( 8) 00:10:47.159 6982.875 - 7011.493: 0.1729% ( 13) 00:10:47.159 7011.493 - 7040.112: 
0.3074% ( 21) 00:10:47.159 7040.112 - 7068.730: 0.4739% ( 26) 00:10:47.159 7068.730 - 7097.348: 0.7364% ( 41) 00:10:47.159 7097.348 - 7125.967: 1.0374% ( 47) 00:10:47.159 7125.967 - 7154.585: 1.4600% ( 66) 00:10:47.159 7154.585 - 7183.203: 1.9147% ( 71) 00:10:47.159 7183.203 - 7211.822: 2.4526% ( 84) 00:10:47.159 7211.822 - 7240.440: 3.1506% ( 109) 00:10:47.159 7240.440 - 7269.059: 3.9639% ( 127) 00:10:47.159 7269.059 - 7297.677: 5.1101% ( 179) 00:10:47.159 7297.677 - 7326.295: 6.2820% ( 183) 00:10:47.159 7326.295 - 7383.532: 9.2085% ( 457) 00:10:47.159 7383.532 - 7440.769: 12.6857% ( 543) 00:10:47.159 7440.769 - 7498.005: 17.4180% ( 739) 00:10:47.159 7498.005 - 7555.242: 22.7395% ( 831) 00:10:47.159 7555.242 - 7612.479: 28.3043% ( 869) 00:10:47.159 7612.479 - 7669.715: 33.9652% ( 884) 00:10:47.159 7669.715 - 7726.952: 39.9462% ( 934) 00:10:47.159 7726.952 - 7784.189: 45.7031% ( 899) 00:10:47.159 7784.189 - 7841.425: 51.4408% ( 896) 00:10:47.159 7841.425 - 7898.662: 57.1337% ( 889) 00:10:47.159 7898.662 - 7955.899: 62.9034% ( 901) 00:10:47.159 7955.899 - 8013.135: 68.3786% ( 855) 00:10:47.159 8013.135 - 8070.372: 73.5207% ( 803) 00:10:47.159 8070.372 - 8127.609: 78.2531% ( 739) 00:10:47.159 8127.609 - 8184.845: 82.6588% ( 688) 00:10:47.159 8184.845 - 8242.082: 86.5010% ( 600) 00:10:47.159 8242.082 - 8299.319: 89.6580% ( 493) 00:10:47.159 8299.319 - 8356.555: 91.9954% ( 365) 00:10:47.159 8356.555 - 8413.792: 93.4810% ( 232) 00:10:47.159 8413.792 - 8471.029: 94.4096% ( 145) 00:10:47.159 8471.029 - 8528.266: 94.9475% ( 84) 00:10:47.159 8528.266 - 8585.502: 95.2869% ( 53) 00:10:47.159 8585.502 - 8642.739: 95.5110% ( 35) 00:10:47.159 8642.739 - 8699.976: 95.7031% ( 30) 00:10:47.159 8699.976 - 8757.212: 95.8568% ( 24) 00:10:47.159 8757.212 - 8814.449: 95.9721% ( 18) 00:10:47.159 8814.449 - 8871.686: 96.0169% ( 7) 00:10:47.159 8871.686 - 8928.922: 96.0681% ( 8) 00:10:47.159 8928.922 - 8986.159: 96.0938% ( 4) 00:10:47.159 8986.159 - 9043.396: 96.1194% ( 4) 00:10:47.159 9043.396 - 9100.632: 96.1450% ( 4) 00:10:47.159 9100.632 - 9157.869: 96.1834% ( 6) 00:10:47.159 9157.869 - 9215.106: 96.2282% ( 7) 00:10:47.159 9215.106 - 9272.342: 96.3115% ( 13) 00:10:47.159 9272.342 - 9329.579: 96.3563% ( 7) 00:10:47.159 9329.579 - 9386.816: 96.4011% ( 7) 00:10:47.159 9386.816 - 9444.052: 96.4844% ( 13) 00:10:47.159 9444.052 - 9501.289: 96.5548% ( 11) 00:10:47.159 9501.289 - 9558.526: 96.6060% ( 8) 00:10:47.159 9558.526 - 9615.762: 96.6637% ( 9) 00:10:47.159 9615.762 - 9672.999: 96.7149% ( 8) 00:10:47.159 9672.999 - 9730.236: 96.7661% ( 8) 00:10:47.159 9730.236 - 9787.472: 96.8110% ( 7) 00:10:47.159 9787.472 - 9844.709: 96.8622% ( 8) 00:10:47.159 9844.709 - 9901.946: 96.9134% ( 8) 00:10:47.159 9901.946 - 9959.183: 96.9647% ( 8) 00:10:47.159 9959.183 - 10016.419: 97.0031% ( 6) 00:10:47.159 10016.419 - 10073.656: 97.0287% ( 4) 00:10:47.159 10073.656 - 10130.893: 97.0543% ( 4) 00:10:47.159 10130.893 - 10188.129: 97.0799% ( 4) 00:10:47.159 10188.129 - 10245.366: 97.1055% ( 4) 00:10:47.159 10245.366 - 10302.603: 97.1311% ( 4) 00:10:47.159 10760.496 - 10817.733: 97.1440% ( 2) 00:10:47.159 10817.733 - 10874.969: 97.1568% ( 2) 00:10:47.159 10874.969 - 10932.206: 97.1888% ( 5) 00:10:47.159 10932.206 - 10989.443: 97.2080% ( 3) 00:10:47.159 11046.679 - 11103.916: 97.2144% ( 1) 00:10:47.159 11103.916 - 11161.153: 97.2208% ( 1) 00:10:47.159 11161.153 - 11218.390: 97.2464% ( 4) 00:10:47.159 11275.626 - 11332.863: 97.2720% ( 4) 00:10:47.159 11332.863 - 11390.100: 97.2784% ( 1) 00:10:47.159 11390.100 - 11447.336: 97.2912% ( 2) 
00:10:47.159 11447.336 - 11504.573: 97.3040% ( 2) 00:10:47.159 11504.573 - 11561.810: 97.3169% ( 2) 00:10:47.159 11561.810 - 11619.046: 97.3297% ( 2) 00:10:47.159 11619.046 - 11676.283: 97.3361% ( 1) 00:10:47.159 11676.283 - 11733.520: 97.3617% ( 4) 00:10:47.159 11733.520 - 11790.756: 97.3937% ( 5) 00:10:47.159 11790.756 - 11847.993: 97.4193% ( 4) 00:10:47.159 11847.993 - 11905.230: 97.4513% ( 5) 00:10:47.159 11905.230 - 11962.466: 97.4834% ( 5) 00:10:47.159 11962.466 - 12019.703: 97.5218% ( 6) 00:10:47.159 12019.703 - 12076.940: 97.5602% ( 6) 00:10:47.159 12076.940 - 12134.176: 97.5986% ( 6) 00:10:47.159 12134.176 - 12191.413: 97.6242% ( 4) 00:10:47.159 12191.413 - 12248.650: 97.6562% ( 5) 00:10:47.159 12248.650 - 12305.886: 97.6947% ( 6) 00:10:47.159 12305.886 - 12363.123: 97.7267% ( 5) 00:10:47.159 12363.123 - 12420.360: 97.7587% ( 5) 00:10:47.159 12420.360 - 12477.597: 97.7907% ( 5) 00:10:47.159 12477.597 - 12534.833: 97.8227% ( 5) 00:10:47.159 12534.833 - 12592.070: 97.8548% ( 5) 00:10:47.159 12592.070 - 12649.307: 97.8804% ( 4) 00:10:47.159 12649.307 - 12706.543: 97.9188% ( 6) 00:10:47.159 12706.543 - 12763.780: 97.9444% ( 4) 00:10:47.159 12763.780 - 12821.017: 97.9700% ( 4) 00:10:47.159 12821.017 - 12878.253: 97.9892% ( 3) 00:10:47.159 12878.253 - 12935.490: 98.0020% ( 2) 00:10:47.159 12935.490 - 12992.727: 98.0149% ( 2) 00:10:47.159 12992.727 - 13049.963: 98.0277% ( 2) 00:10:47.159 13049.963 - 13107.200: 98.0405% ( 2) 00:10:47.160 13107.200 - 13164.437: 98.0533% ( 2) 00:10:47.160 13164.437 - 13221.673: 98.0725% ( 3) 00:10:47.160 13221.673 - 13278.910: 98.0853% ( 2) 00:10:47.160 13278.910 - 13336.147: 98.0981% ( 2) 00:10:47.160 13336.147 - 13393.383: 98.1301% ( 5) 00:10:47.160 13393.383 - 13450.620: 98.1365% ( 1) 00:10:47.160 13450.620 - 13507.857: 98.1493% ( 2) 00:10:47.160 13507.857 - 13565.093: 98.1621% ( 2) 00:10:47.160 13565.093 - 13622.330: 98.1749% ( 2) 00:10:47.160 13622.330 - 13679.567: 98.1942% ( 3) 00:10:47.160 13679.567 - 13736.803: 98.2070% ( 2) 00:10:47.160 13736.803 - 13794.040: 98.2198% ( 2) 00:10:47.160 13794.040 - 13851.277: 98.2646% ( 7) 00:10:47.160 13851.277 - 13908.514: 98.3222% ( 9) 00:10:47.160 13908.514 - 13965.750: 98.3671% ( 7) 00:10:47.160 13965.750 - 14022.987: 98.3991% ( 5) 00:10:47.160 14022.987 - 14080.224: 98.4311% ( 5) 00:10:47.160 14080.224 - 14137.460: 98.4695% ( 6) 00:10:47.160 14137.460 - 14194.697: 98.5079% ( 6) 00:10:47.160 14194.697 - 14251.934: 98.5464% ( 6) 00:10:47.160 14251.934 - 14309.170: 98.5784% ( 5) 00:10:47.160 14309.170 - 14366.407: 98.6168% ( 6) 00:10:47.160 14366.407 - 14423.644: 98.6552% ( 6) 00:10:47.160 14423.644 - 14480.880: 98.6808% ( 4) 00:10:47.160 14480.880 - 14538.117: 98.7065% ( 4) 00:10:47.160 14538.117 - 14595.354: 98.7257% ( 3) 00:10:47.160 14595.354 - 14652.590: 98.7705% ( 7) 00:10:47.160 14652.590 - 14767.064: 98.8409% ( 11) 00:10:47.160 14767.064 - 14881.537: 98.8794% ( 6) 00:10:47.160 14881.537 - 14996.010: 98.9370% ( 9) 00:10:47.160 14996.010 - 15110.484: 98.9818% ( 7) 00:10:47.160 15110.484 - 15224.957: 99.0202% ( 6) 00:10:47.160 15224.957 - 15339.431: 99.0651% ( 7) 00:10:47.160 15339.431 - 15453.904: 99.1099% ( 7) 00:10:47.160 15453.904 - 15568.377: 99.1547% ( 7) 00:10:47.160 15568.377 - 15682.851: 99.1803% ( 4) 00:10:47.160 35944.636 - 36173.583: 99.1931% ( 2) 00:10:47.160 36173.583 - 36402.529: 99.2444% ( 8) 00:10:47.160 36402.529 - 36631.476: 99.3020% ( 9) 00:10:47.160 36631.476 - 36860.423: 99.3468% ( 7) 00:10:47.160 36860.423 - 37089.369: 99.3916% ( 7) 00:10:47.160 37089.369 - 37318.316: 99.4429% ( 8) 
00:10:47.160 37318.316 - 37547.263: 99.4941% ( 8) 00:10:47.160 37547.263 - 37776.210: 99.5453% ( 8) 00:10:47.160 37776.210 - 38005.156: 99.5902% ( 7) 00:10:47.160 42584.091 - 42813.038: 99.6222% ( 5) 00:10:47.160 42813.038 - 43041.984: 99.6670% ( 7) 00:10:47.160 43041.984 - 43270.931: 99.7118% ( 7) 00:10:47.160 43270.931 - 43499.878: 99.7631% ( 8) 00:10:47.160 43499.878 - 43728.824: 99.8079% ( 7) 00:10:47.160 43728.824 - 43957.771: 99.8527% ( 7) 00:10:47.160 43957.771 - 44186.718: 99.8975% ( 7) 00:10:47.160 44186.718 - 44415.665: 99.9424% ( 7) 00:10:47.160 44415.665 - 44644.611: 99.9872% ( 7) 00:10:47.160 44644.611 - 44873.558: 100.0000% ( 2) 00:10:47.160 00:10:47.160 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:47.160 ============================================================================== 00:10:47.160 Range in us Cumulative IO count 00:10:47.160 6868.402 - 6897.020: 0.0128% ( 2) 00:10:47.160 6897.020 - 6925.638: 0.0256% ( 2) 00:10:47.160 6925.638 - 6954.257: 0.0384% ( 2) 00:10:47.160 6954.257 - 6982.875: 0.0897% ( 8) 00:10:47.160 6982.875 - 7011.493: 0.1857% ( 15) 00:10:47.160 7011.493 - 7040.112: 0.3202% ( 21) 00:10:47.160 7040.112 - 7068.730: 0.5059% ( 29) 00:10:47.160 7068.730 - 7097.348: 0.7556% ( 39) 00:10:47.160 7097.348 - 7125.967: 1.0374% ( 44) 00:10:47.160 7125.967 - 7154.585: 1.4793% ( 69) 00:10:47.160 7154.585 - 7183.203: 1.9659% ( 76) 00:10:47.160 7183.203 - 7211.822: 2.5423% ( 90) 00:10:47.160 7211.822 - 7240.440: 3.3107% ( 120) 00:10:47.160 7240.440 - 7269.059: 4.0920% ( 122) 00:10:47.160 7269.059 - 7297.677: 5.0013% ( 142) 00:10:47.160 7297.677 - 7326.295: 6.1539% ( 180) 00:10:47.160 7326.295 - 7383.532: 9.0164% ( 447) 00:10:47.160 7383.532 - 7440.769: 12.8138% ( 593) 00:10:47.160 7440.769 - 7498.005: 17.7254% ( 767) 00:10:47.160 7498.005 - 7555.242: 22.9828% ( 821) 00:10:47.160 7555.242 - 7612.479: 28.7590% ( 902) 00:10:47.160 7612.479 - 7669.715: 34.2982% ( 865) 00:10:47.160 7669.715 - 7726.952: 39.9782% ( 887) 00:10:47.160 7726.952 - 7784.189: 45.7800% ( 906) 00:10:47.160 7784.189 - 7841.425: 51.7290% ( 929) 00:10:47.160 7841.425 - 7898.662: 57.5756% ( 913) 00:10:47.160 7898.662 - 7955.899: 63.1212% ( 866) 00:10:47.160 7955.899 - 8013.135: 68.4874% ( 838) 00:10:47.160 8013.135 - 8070.372: 73.6488% ( 806) 00:10:47.160 8070.372 - 8127.609: 78.4324% ( 747) 00:10:47.160 8127.609 - 8184.845: 82.7613% ( 676) 00:10:47.160 8184.845 - 8242.082: 86.5202% ( 587) 00:10:47.160 8242.082 - 8299.319: 89.6324% ( 486) 00:10:47.160 8299.319 - 8356.555: 91.9634% ( 364) 00:10:47.160 8356.555 - 8413.792: 93.6347% ( 261) 00:10:47.160 8413.792 - 8471.029: 94.6017% ( 151) 00:10:47.160 8471.029 - 8528.266: 95.0499% ( 70) 00:10:47.160 8528.266 - 8585.502: 95.3765% ( 51) 00:10:47.160 8585.502 - 8642.739: 95.5879% ( 33) 00:10:47.160 8642.739 - 8699.976: 95.7351% ( 23) 00:10:47.160 8699.976 - 8757.212: 95.8888% ( 24) 00:10:47.160 8757.212 - 8814.449: 95.9977% ( 17) 00:10:47.160 8814.449 - 8871.686: 96.0938% ( 15) 00:10:47.160 8871.686 - 8928.922: 96.1578% ( 10) 00:10:47.160 8928.922 - 8986.159: 96.1706% ( 2) 00:10:47.160 8986.159 - 9043.396: 96.1962% ( 4) 00:10:47.160 9043.396 - 9100.632: 96.2218% ( 4) 00:10:47.160 9100.632 - 9157.869: 96.2731% ( 8) 00:10:47.160 9157.869 - 9215.106: 96.3051% ( 5) 00:10:47.160 9215.106 - 9272.342: 96.3499% ( 7) 00:10:47.160 9272.342 - 9329.579: 96.4011% ( 8) 00:10:47.160 9329.579 - 9386.816: 96.4331% ( 5) 00:10:47.160 9386.816 - 9444.052: 96.4588% ( 4) 00:10:47.160 9444.052 - 9501.289: 96.4844% ( 4) 00:10:47.160 9501.289 - 9558.526: 
96.5100% ( 4) 00:10:47.160 9558.526 - 9615.762: 96.5292% ( 3) 00:10:47.160 9615.762 - 9672.999: 96.5548% ( 4) 00:10:47.160 9672.999 - 9730.236: 96.5804% ( 4) 00:10:47.160 9730.236 - 9787.472: 96.6060% ( 4) 00:10:47.160 9787.472 - 9844.709: 96.6253% ( 3) 00:10:47.160 9844.709 - 9901.946: 96.6509% ( 4) 00:10:47.160 9901.946 - 9959.183: 96.6765% ( 4) 00:10:47.160 9959.183 - 10016.419: 96.7021% ( 4) 00:10:47.160 10016.419 - 10073.656: 96.7213% ( 3) 00:10:47.160 10359.839 - 10417.076: 96.7341% ( 2) 00:10:47.160 10417.076 - 10474.313: 96.7597% ( 4) 00:10:47.160 10474.313 - 10531.549: 96.7853% ( 4) 00:10:47.160 10531.549 - 10588.786: 96.8366% ( 8) 00:10:47.160 10588.786 - 10646.023: 96.8750% ( 6) 00:10:47.160 10646.023 - 10703.259: 96.9198% ( 7) 00:10:47.160 10703.259 - 10760.496: 96.9454% ( 4) 00:10:47.160 10760.496 - 10817.733: 96.9839% ( 6) 00:10:47.160 10817.733 - 10874.969: 97.0415% ( 9) 00:10:47.160 10874.969 - 10932.206: 97.0991% ( 9) 00:10:47.160 10932.206 - 10989.443: 97.1632% ( 10) 00:10:47.160 10989.443 - 11046.679: 97.2208% ( 9) 00:10:47.160 11046.679 - 11103.916: 97.2784% ( 9) 00:10:47.160 11103.916 - 11161.153: 97.3297% ( 8) 00:10:47.160 11161.153 - 11218.390: 97.3873% ( 9) 00:10:47.160 11218.390 - 11275.626: 97.4385% ( 8) 00:10:47.160 11275.626 - 11332.863: 97.4769% ( 6) 00:10:47.160 11332.863 - 11390.100: 97.5090% ( 5) 00:10:47.160 11390.100 - 11447.336: 97.5346% ( 4) 00:10:47.160 11447.336 - 11504.573: 97.5666% ( 5) 00:10:47.160 11504.573 - 11561.810: 97.5986% ( 5) 00:10:47.160 11561.810 - 11619.046: 97.6306% ( 5) 00:10:47.160 11619.046 - 11676.283: 97.6562% ( 4) 00:10:47.160 11676.283 - 11733.520: 97.6883% ( 5) 00:10:47.160 11733.520 - 11790.756: 97.7203% ( 5) 00:10:47.160 11790.756 - 11847.993: 97.7523% ( 5) 00:10:47.160 11847.993 - 11905.230: 97.7779% ( 4) 00:10:47.160 11905.230 - 11962.466: 97.8035% ( 4) 00:10:47.160 11962.466 - 12019.703: 97.8227% ( 3) 00:10:47.160 12019.703 - 12076.940: 97.8291% ( 1) 00:10:47.160 12076.940 - 12134.176: 97.8420% ( 2) 00:10:47.160 12134.176 - 12191.413: 97.8548% ( 2) 00:10:47.160 12191.413 - 12248.650: 97.8612% ( 1) 00:10:47.160 12248.650 - 12305.886: 97.8740% ( 2) 00:10:47.160 12305.886 - 12363.123: 97.8868% ( 2) 00:10:47.160 12363.123 - 12420.360: 97.8932% ( 1) 00:10:47.160 12420.360 - 12477.597: 97.9060% ( 2) 00:10:47.160 12477.597 - 12534.833: 97.9124% ( 1) 00:10:47.160 12534.833 - 12592.070: 97.9252% ( 2) 00:10:47.160 12592.070 - 12649.307: 97.9572% ( 5) 00:10:47.160 12649.307 - 12706.543: 97.9764% ( 3) 00:10:47.160 12706.543 - 12763.780: 97.9892% ( 2) 00:10:47.160 12763.780 - 12821.017: 98.0020% ( 2) 00:10:47.160 12821.017 - 12878.253: 98.0085% ( 1) 00:10:47.160 12878.253 - 12935.490: 98.0277% ( 3) 00:10:47.160 12935.490 - 12992.727: 98.0469% ( 3) 00:10:47.160 12992.727 - 13049.963: 98.0597% ( 2) 00:10:47.160 13049.963 - 13107.200: 98.0725% ( 2) 00:10:47.160 13107.200 - 13164.437: 98.0853% ( 2) 00:10:47.160 13164.437 - 13221.673: 98.0981% ( 2) 00:10:47.160 13221.673 - 13278.910: 98.1045% ( 1) 00:10:47.160 13278.910 - 13336.147: 98.1237% ( 3) 00:10:47.160 13336.147 - 13393.383: 98.1365% ( 2) 00:10:47.160 13393.383 - 13450.620: 98.1493% ( 2) 00:10:47.160 13450.620 - 13507.857: 98.1621% ( 2) 00:10:47.160 13507.857 - 13565.093: 98.1749% ( 2) 00:10:47.160 13565.093 - 13622.330: 98.1942% ( 3) 00:10:47.160 13622.330 - 13679.567: 98.2070% ( 2) 00:10:47.160 13679.567 - 13736.803: 98.2198% ( 2) 00:10:47.160 13736.803 - 13794.040: 98.2262% ( 1) 00:10:47.160 13794.040 - 13851.277: 98.2646% ( 6) 00:10:47.160 13851.277 - 13908.514: 98.3030% ( 6) 
00:10:47.160 13908.514 - 13965.750: 98.3350% ( 5) 00:10:47.160 13965.750 - 14022.987: 98.3671% ( 5) 00:10:47.160 14022.987 - 14080.224: 98.4055% ( 6) 00:10:47.160 14080.224 - 14137.460: 98.4375% ( 5) 00:10:47.160 14137.460 - 14194.697: 98.4823% ( 7) 00:10:47.160 14194.697 - 14251.934: 98.5143% ( 5) 00:10:47.160 14251.934 - 14309.170: 98.5528% ( 6) 00:10:47.160 14309.170 - 14366.407: 98.5784% ( 4) 00:10:47.160 14366.407 - 14423.644: 98.5976% ( 3) 00:10:47.160 14423.644 - 14480.880: 98.6168% ( 3) 00:10:47.160 14480.880 - 14538.117: 98.6360% ( 3) 00:10:47.160 14538.117 - 14595.354: 98.6552% ( 3) 00:10:47.160 14595.354 - 14652.590: 98.6744% ( 3) 00:10:47.160 14652.590 - 14767.064: 98.7193% ( 7) 00:10:47.160 14767.064 - 14881.537: 98.7641% ( 7) 00:10:47.160 14881.537 - 14996.010: 98.8217% ( 9) 00:10:47.161 14996.010 - 15110.484: 98.8601% ( 6) 00:10:47.161 15110.484 - 15224.957: 98.8986% ( 6) 00:10:47.161 15224.957 - 15339.431: 98.9498% ( 8) 00:10:47.161 15339.431 - 15453.904: 98.9946% ( 7) 00:10:47.161 15453.904 - 15568.377: 99.0394% ( 7) 00:10:47.161 15568.377 - 15682.851: 99.0843% ( 7) 00:10:47.161 15682.851 - 15797.324: 99.1291% ( 7) 00:10:47.161 15797.324 - 15911.797: 99.1739% ( 7) 00:10:47.161 15911.797 - 16026.271: 99.1803% ( 1) 00:10:47.161 34342.009 - 34570.955: 99.2252% ( 7) 00:10:47.161 34570.955 - 34799.902: 99.2700% ( 7) 00:10:47.161 34799.902 - 35028.849: 99.3148% ( 7) 00:10:47.161 35028.849 - 35257.796: 99.3532% ( 6) 00:10:47.161 35257.796 - 35486.742: 99.3981% ( 7) 00:10:47.161 35486.742 - 35715.689: 99.4429% ( 7) 00:10:47.161 35715.689 - 35944.636: 99.4877% ( 7) 00:10:47.161 35944.636 - 36173.583: 99.5325% ( 7) 00:10:47.161 36173.583 - 36402.529: 99.5838% ( 8) 00:10:47.161 36402.529 - 36631.476: 99.5902% ( 1) 00:10:47.161 40752.517 - 40981.464: 99.6158% ( 4) 00:10:47.161 40981.464 - 41210.410: 99.6542% ( 6) 00:10:47.161 41210.410 - 41439.357: 99.6926% ( 6) 00:10:47.161 41439.357 - 41668.304: 99.7374% ( 7) 00:10:47.161 41668.304 - 41897.251: 99.7823% ( 7) 00:10:47.161 41897.251 - 42126.197: 99.8271% ( 7) 00:10:47.161 42126.197 - 42355.144: 99.8783% ( 8) 00:10:47.161 42355.144 - 42584.091: 99.9232% ( 7) 00:10:47.161 42584.091 - 42813.038: 99.9744% ( 8) 00:10:47.161 42813.038 - 43041.984: 100.0000% ( 4) 00:10:47.161 00:10:47.161 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:47.161 ============================================================================== 00:10:47.161 Range in us Cumulative IO count 00:10:47.161 6868.402 - 6897.020: 0.0128% ( 2) 00:10:47.161 6897.020 - 6925.638: 0.0320% ( 3) 00:10:47.161 6925.638 - 6954.257: 0.0384% ( 1) 00:10:47.161 6954.257 - 6982.875: 0.0897% ( 8) 00:10:47.161 6982.875 - 7011.493: 0.1857% ( 15) 00:10:47.161 7011.493 - 7040.112: 0.3010% ( 18) 00:10:47.161 7040.112 - 7068.730: 0.4867% ( 29) 00:10:47.161 7068.730 - 7097.348: 0.7684% ( 44) 00:10:47.161 7097.348 - 7125.967: 1.0438% ( 43) 00:10:47.161 7125.967 - 7154.585: 1.4280% ( 60) 00:10:47.161 7154.585 - 7183.203: 1.8507% ( 66) 00:10:47.161 7183.203 - 7211.822: 2.5487% ( 109) 00:10:47.161 7211.822 - 7240.440: 3.2339% ( 107) 00:10:47.161 7240.440 - 7269.059: 4.1048% ( 136) 00:10:47.161 7269.059 - 7297.677: 5.0845% ( 153) 00:10:47.161 7297.677 - 7326.295: 6.0963% ( 158) 00:10:47.161 7326.295 - 7383.532: 9.2469% ( 492) 00:10:47.161 7383.532 - 7440.769: 12.9034% ( 571) 00:10:47.161 7440.769 - 7498.005: 17.6230% ( 737) 00:10:47.161 7498.005 - 7555.242: 23.0277% ( 844) 00:10:47.161 7555.242 - 7612.479: 28.5540% ( 863) 00:10:47.161 7612.479 - 7669.715: 34.2341% ( 887) 
00:10:47.161 7669.715 - 7726.952: 40.0166% ( 903) 00:10:47.161 7726.952 - 7784.189: 45.8056% ( 904) 00:10:47.161 7784.189 - 7841.425: 51.5369% ( 895) 00:10:47.161 7841.425 - 7898.662: 57.4091% ( 917) 00:10:47.161 7898.662 - 7955.899: 63.0123% ( 875) 00:10:47.161 7955.899 - 8013.135: 68.5771% ( 869) 00:10:47.161 8013.135 - 8070.372: 73.7513% ( 808) 00:10:47.161 8070.372 - 8127.609: 78.5476% ( 749) 00:10:47.161 8127.609 - 8184.845: 82.8637% ( 674) 00:10:47.161 8184.845 - 8242.082: 86.7123% ( 601) 00:10:47.161 8242.082 - 8299.319: 89.7669% ( 477) 00:10:47.161 8299.319 - 8356.555: 92.0658% ( 359) 00:10:47.161 8356.555 - 8413.792: 93.6796% ( 252) 00:10:47.161 8413.792 - 8471.029: 94.5761% ( 140) 00:10:47.161 8471.029 - 8528.266: 95.0564% ( 75) 00:10:47.161 8528.266 - 8585.502: 95.4086% ( 55) 00:10:47.161 8585.502 - 8642.739: 95.6199% ( 33) 00:10:47.161 8642.739 - 8699.976: 95.7864% ( 26) 00:10:47.161 8699.976 - 8757.212: 95.9080% ( 19) 00:10:47.161 8757.212 - 8814.449: 96.0041% ( 15) 00:10:47.161 8814.449 - 8871.686: 96.1066% ( 16) 00:10:47.161 8871.686 - 8928.922: 96.1514% ( 7) 00:10:47.161 8928.922 - 8986.159: 96.2090% ( 9) 00:10:47.161 8986.159 - 9043.396: 96.2538% ( 7) 00:10:47.161 9043.396 - 9100.632: 96.3051% ( 8) 00:10:47.161 9100.632 - 9157.869: 96.3563% ( 8) 00:10:47.161 9157.869 - 9215.106: 96.4011% ( 7) 00:10:47.161 9215.106 - 9272.342: 96.4460% ( 7) 00:10:47.161 9272.342 - 9329.579: 96.4780% ( 5) 00:10:47.161 9329.579 - 9386.816: 96.4972% ( 3) 00:10:47.161 9386.816 - 9444.052: 96.5228% ( 4) 00:10:47.161 9444.052 - 9501.289: 96.5484% ( 4) 00:10:47.161 9501.289 - 9558.526: 96.5676% ( 3) 00:10:47.161 9558.526 - 9615.762: 96.5868% ( 3) 00:10:47.161 9615.762 - 9672.999: 96.6124% ( 4) 00:10:47.161 9672.999 - 9730.236: 96.6317% ( 3) 00:10:47.161 9730.236 - 9787.472: 96.6573% ( 4) 00:10:47.161 9787.472 - 9844.709: 96.6829% ( 4) 00:10:47.161 9844.709 - 9901.946: 96.7085% ( 4) 00:10:47.161 9901.946 - 9959.183: 96.7213% ( 2) 00:10:47.161 10474.313 - 10531.549: 96.7277% ( 1) 00:10:47.161 10531.549 - 10588.786: 96.7469% ( 3) 00:10:47.161 10588.786 - 10646.023: 96.7661% ( 3) 00:10:47.161 10646.023 - 10703.259: 96.7853% ( 3) 00:10:47.161 10703.259 - 10760.496: 96.8110% ( 4) 00:10:47.161 10760.496 - 10817.733: 96.8302% ( 3) 00:10:47.161 10817.733 - 10874.969: 96.8494% ( 3) 00:10:47.161 10874.969 - 10932.206: 96.8686% ( 3) 00:10:47.161 10932.206 - 10989.443: 96.8878% ( 3) 00:10:47.161 10989.443 - 11046.679: 96.9070% ( 3) 00:10:47.161 11046.679 - 11103.916: 96.9326% ( 4) 00:10:47.161 11103.916 - 11161.153: 96.9903% ( 9) 00:10:47.161 11161.153 - 11218.390: 97.0671% ( 12) 00:10:47.161 11218.390 - 11275.626: 97.1055% ( 6) 00:10:47.161 11275.626 - 11332.863: 97.1568% ( 8) 00:10:47.161 11332.863 - 11390.100: 97.1888% ( 5) 00:10:47.161 11390.100 - 11447.336: 97.2592% ( 11) 00:10:47.161 11447.336 - 11504.573: 97.3233% ( 10) 00:10:47.161 11504.573 - 11561.810: 97.3745% ( 8) 00:10:47.161 11561.810 - 11619.046: 97.4321% ( 9) 00:10:47.161 11619.046 - 11676.283: 97.4834% ( 8) 00:10:47.161 11676.283 - 11733.520: 97.5346% ( 8) 00:10:47.161 11733.520 - 11790.756: 97.5794% ( 7) 00:10:47.161 11790.756 - 11847.993: 97.6114% ( 5) 00:10:47.161 11847.993 - 11905.230: 97.6498% ( 6) 00:10:47.161 11905.230 - 11962.466: 97.6819% ( 5) 00:10:47.161 11962.466 - 12019.703: 97.6883% ( 1) 00:10:47.161 12019.703 - 12076.940: 97.7011% ( 2) 00:10:47.161 12076.940 - 12134.176: 97.7139% ( 2) 00:10:47.161 12134.176 - 12191.413: 97.7267% ( 2) 00:10:47.161 12191.413 - 12248.650: 97.7395% ( 2) 00:10:47.161 12248.650 - 12305.886: 97.7459% ( 
1) 00:10:47.161 12305.886 - 12363.123: 97.7587% ( 2) 00:10:47.161 12363.123 - 12420.360: 97.7715% ( 2) 00:10:47.161 12420.360 - 12477.597: 97.7843% ( 2) 00:10:47.161 12477.597 - 12534.833: 97.7971% ( 2) 00:10:47.161 12534.833 - 12592.070: 97.8099% ( 2) 00:10:47.161 12592.070 - 12649.307: 97.8163% ( 1) 00:10:47.161 12649.307 - 12706.543: 97.8291% ( 2) 00:10:47.161 12706.543 - 12763.780: 97.8420% ( 2) 00:10:47.161 12763.780 - 12821.017: 97.8484% ( 1) 00:10:47.161 12821.017 - 12878.253: 97.8612% ( 2) 00:10:47.161 12878.253 - 12935.490: 97.8676% ( 1) 00:10:47.161 12935.490 - 12992.727: 97.8996% ( 5) 00:10:47.161 12992.727 - 13049.963: 97.9252% ( 4) 00:10:47.161 13049.963 - 13107.200: 97.9508% ( 4) 00:10:47.161 13107.200 - 13164.437: 97.9764% ( 4) 00:10:47.161 13164.437 - 13221.673: 98.0020% ( 4) 00:10:47.161 13221.673 - 13278.910: 98.0277% ( 4) 00:10:47.161 13278.910 - 13336.147: 98.0469% ( 3) 00:10:47.161 13336.147 - 13393.383: 98.0661% ( 3) 00:10:47.161 13393.383 - 13450.620: 98.0789% ( 2) 00:10:47.161 13450.620 - 13507.857: 98.1109% ( 5) 00:10:47.161 13507.857 - 13565.093: 98.1493% ( 6) 00:10:47.161 13565.093 - 13622.330: 98.1749% ( 4) 00:10:47.161 13622.330 - 13679.567: 98.2198% ( 7) 00:10:47.161 13679.567 - 13736.803: 98.2582% ( 6) 00:10:47.161 13736.803 - 13794.040: 98.2966% ( 6) 00:10:47.161 13794.040 - 13851.277: 98.3286% ( 5) 00:10:47.161 13851.277 - 13908.514: 98.3671% ( 6) 00:10:47.161 13908.514 - 13965.750: 98.4055% ( 6) 00:10:47.161 13965.750 - 14022.987: 98.4375% ( 5) 00:10:47.161 14022.987 - 14080.224: 98.4759% ( 6) 00:10:47.161 14080.224 - 14137.460: 98.5143% ( 6) 00:10:47.161 14137.460 - 14194.697: 98.5400% ( 4) 00:10:47.161 14194.697 - 14251.934: 98.5848% ( 7) 00:10:47.161 14251.934 - 14309.170: 98.6232% ( 6) 00:10:47.161 14309.170 - 14366.407: 98.6552% ( 5) 00:10:47.161 14366.407 - 14423.644: 98.6936% ( 6) 00:10:47.161 14423.644 - 14480.880: 98.7257% ( 5) 00:10:47.161 14480.880 - 14538.117: 98.7577% ( 5) 00:10:47.161 14538.117 - 14595.354: 98.7705% ( 2) 00:10:47.161 15224.957 - 15339.431: 98.7961% ( 4) 00:10:47.161 15339.431 - 15453.904: 98.8409% ( 7) 00:10:47.161 15453.904 - 15568.377: 98.8794% ( 6) 00:10:47.161 15568.377 - 15682.851: 98.9178% ( 6) 00:10:47.161 15682.851 - 15797.324: 98.9626% ( 7) 00:10:47.161 15797.324 - 15911.797: 99.0074% ( 7) 00:10:47.161 15911.797 - 16026.271: 99.0523% ( 7) 00:10:47.161 16026.271 - 16140.744: 99.0971% ( 7) 00:10:47.161 16140.744 - 16255.217: 99.1355% ( 6) 00:10:47.161 16255.217 - 16369.691: 99.1803% ( 7) 00:10:47.161 31823.595 - 32052.541: 99.1931% ( 2) 00:10:47.161 32052.541 - 32281.488: 99.2444% ( 8) 00:10:47.161 32281.488 - 32510.435: 99.2956% ( 8) 00:10:47.161 32510.435 - 32739.382: 99.3404% ( 7) 00:10:47.161 32739.382 - 32968.328: 99.3788% ( 6) 00:10:47.161 32968.328 - 33197.275: 99.4237% ( 7) 00:10:47.161 33197.275 - 33426.222: 99.4685% ( 7) 00:10:47.161 33426.222 - 33655.169: 99.5133% ( 7) 00:10:47.161 33655.169 - 33884.115: 99.5581% ( 7) 00:10:47.161 33884.115 - 34113.062: 99.5902% ( 5) 00:10:47.161 38234.103 - 38463.050: 99.6030% ( 2) 00:10:47.161 38463.050 - 38691.997: 99.6478% ( 7) 00:10:47.161 38691.997 - 38920.943: 99.6990% ( 8) 00:10:47.161 38920.943 - 39149.890: 99.7503% ( 8) 00:10:47.161 39149.890 - 39378.837: 99.7951% ( 7) 00:10:47.161 39378.837 - 39607.783: 99.8463% ( 8) 00:10:47.161 39607.783 - 39836.730: 99.8911% ( 7) 00:10:47.161 39836.730 - 40065.677: 99.9360% ( 7) 00:10:47.161 40065.677 - 40294.624: 99.9872% ( 8) 00:10:47.161 40294.624 - 40523.570: 100.0000% ( 2) 00:10:47.162 00:10:47.162 Latency histogram for 
PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:47.162 ============================================================================== 00:10:47.162 Range in us Cumulative IO count 00:10:47.162 6897.020 - 6925.638: 0.0192% ( 3) 00:10:47.162 6925.638 - 6954.257: 0.0384% ( 3) 00:10:47.162 6954.257 - 6982.875: 0.0768% ( 6) 00:10:47.162 6982.875 - 7011.493: 0.1409% ( 10) 00:10:47.162 7011.493 - 7040.112: 0.2882% ( 23) 00:10:47.162 7040.112 - 7068.730: 0.4995% ( 33) 00:10:47.162 7068.730 - 7097.348: 0.7684% ( 42) 00:10:47.162 7097.348 - 7125.967: 1.0374% ( 42) 00:10:47.162 7125.967 - 7154.585: 1.4344% ( 62) 00:10:47.162 7154.585 - 7183.203: 1.9467% ( 80) 00:10:47.162 7183.203 - 7211.822: 2.5871% ( 100) 00:10:47.162 7211.822 - 7240.440: 3.2339% ( 101) 00:10:47.162 7240.440 - 7269.059: 3.9959% ( 119) 00:10:47.162 7269.059 - 7297.677: 4.9052% ( 142) 00:10:47.162 7297.677 - 7326.295: 6.0003% ( 171) 00:10:47.162 7326.295 - 7383.532: 9.0292% ( 473) 00:10:47.162 7383.532 - 7440.769: 12.8714% ( 600) 00:10:47.162 7440.769 - 7498.005: 17.5589% ( 732) 00:10:47.162 7498.005 - 7555.242: 23.0917% ( 864) 00:10:47.162 7555.242 - 7612.479: 28.6629% ( 870) 00:10:47.162 7612.479 - 7669.715: 34.3558% ( 889) 00:10:47.162 7669.715 - 7726.952: 40.1127% ( 899) 00:10:47.162 7726.952 - 7784.189: 45.9016% ( 904) 00:10:47.162 7784.189 - 7841.425: 51.6778% ( 902) 00:10:47.162 7841.425 - 7898.662: 57.4987% ( 909) 00:10:47.162 7898.662 - 7955.899: 63.1212% ( 878) 00:10:47.162 7955.899 - 8013.135: 68.6091% ( 857) 00:10:47.162 8013.135 - 8070.372: 73.8986% ( 826) 00:10:47.162 8070.372 - 8127.609: 78.6117% ( 736) 00:10:47.162 8127.609 - 8184.845: 83.0494% ( 693) 00:10:47.162 8184.845 - 8242.082: 86.8596% ( 595) 00:10:47.162 8242.082 - 8299.319: 89.8758% ( 471) 00:10:47.162 8299.319 - 8356.555: 92.1875% ( 361) 00:10:47.162 8356.555 - 8413.792: 93.6860% ( 234) 00:10:47.162 8413.792 - 8471.029: 94.5377% ( 133) 00:10:47.162 8471.029 - 8528.266: 95.0371% ( 78) 00:10:47.162 8528.266 - 8585.502: 95.3765% ( 53) 00:10:47.162 8585.502 - 8642.739: 95.6327% ( 40) 00:10:47.162 8642.739 - 8699.976: 95.7800% ( 23) 00:10:47.162 8699.976 - 8757.212: 95.9016% ( 19) 00:10:47.162 8757.212 - 8814.449: 96.0297% ( 20) 00:10:47.162 8814.449 - 8871.686: 96.1642% ( 21) 00:10:47.162 8871.686 - 8928.922: 96.2474% ( 13) 00:10:47.162 8928.922 - 8986.159: 96.2987% ( 8) 00:10:47.162 8986.159 - 9043.396: 96.3499% ( 8) 00:10:47.162 9043.396 - 9100.632: 96.3947% ( 7) 00:10:47.162 9100.632 - 9157.869: 96.4460% ( 8) 00:10:47.162 9157.869 - 9215.106: 96.4716% ( 4) 00:10:47.162 9215.106 - 9272.342: 96.4972% ( 4) 00:10:47.162 9272.342 - 9329.579: 96.5164% ( 3) 00:10:47.162 9329.579 - 9386.816: 96.5420% ( 4) 00:10:47.162 9386.816 - 9444.052: 96.5612% ( 3) 00:10:47.162 9444.052 - 9501.289: 96.5868% ( 4) 00:10:47.162 9501.289 - 9558.526: 96.6124% ( 4) 00:10:47.162 9558.526 - 9615.762: 96.6317% ( 3) 00:10:47.162 9615.762 - 9672.999: 96.6573% ( 4) 00:10:47.162 9672.999 - 9730.236: 96.6829% ( 4) 00:10:47.162 9730.236 - 9787.472: 96.7085% ( 4) 00:10:47.162 9787.472 - 9844.709: 96.7213% ( 2) 00:10:47.162 10359.839 - 10417.076: 96.7341% ( 2) 00:10:47.162 10417.076 - 10474.313: 96.7533% ( 3) 00:10:47.162 10474.313 - 10531.549: 96.7725% ( 3) 00:10:47.162 10531.549 - 10588.786: 96.7982% ( 4) 00:10:47.162 10588.786 - 10646.023: 96.8174% ( 3) 00:10:47.162 10646.023 - 10703.259: 96.8366% ( 3) 00:10:47.162 10703.259 - 10760.496: 96.8558% ( 3) 00:10:47.162 10760.496 - 10817.733: 96.8750% ( 3) 00:10:47.162 10817.733 - 10874.969: 96.8878% ( 2) 00:10:47.162 10874.969 - 10932.206: 96.9070% ( 
3) 00:10:47.162 10932.206 - 10989.443: 96.9262% ( 3) 00:10:47.162 10989.443 - 11046.679: 96.9454% ( 3) 00:10:47.162 11046.679 - 11103.916: 96.9647% ( 3) 00:10:47.162 11103.916 - 11161.153: 96.9839% ( 3) 00:10:47.162 11161.153 - 11218.390: 97.0031% ( 3) 00:10:47.162 11218.390 - 11275.626: 97.0223% ( 3) 00:10:47.162 11275.626 - 11332.863: 97.0415% ( 3) 00:10:47.162 11332.863 - 11390.100: 97.0543% ( 2) 00:10:47.162 11390.100 - 11447.336: 97.0799% ( 4) 00:10:47.162 11447.336 - 11504.573: 97.0991% ( 3) 00:10:47.162 11504.573 - 11561.810: 97.1119% ( 2) 00:10:47.162 11561.810 - 11619.046: 97.1376% ( 4) 00:10:47.162 11619.046 - 11676.283: 97.1696% ( 5) 00:10:47.162 11676.283 - 11733.520: 97.1888% ( 3) 00:10:47.162 11733.520 - 11790.756: 97.2144% ( 4) 00:10:47.162 11790.756 - 11847.993: 97.2464% ( 5) 00:10:47.162 11847.993 - 11905.230: 97.2848% ( 6) 00:10:47.162 11905.230 - 11962.466: 97.3233% ( 6) 00:10:47.162 11962.466 - 12019.703: 97.3553% ( 5) 00:10:47.162 12019.703 - 12076.940: 97.3937% ( 6) 00:10:47.162 12076.940 - 12134.176: 97.4257% ( 5) 00:10:47.162 12134.176 - 12191.413: 97.4641% ( 6) 00:10:47.162 12191.413 - 12248.650: 97.5090% ( 7) 00:10:47.162 12248.650 - 12305.886: 97.5474% ( 6) 00:10:47.162 12305.886 - 12363.123: 97.5922% ( 7) 00:10:47.162 12363.123 - 12420.360: 97.6242% ( 5) 00:10:47.162 12420.360 - 12477.597: 97.6627% ( 6) 00:10:47.162 12477.597 - 12534.833: 97.7011% ( 6) 00:10:47.162 12534.833 - 12592.070: 97.7139% ( 2) 00:10:47.162 12592.070 - 12649.307: 97.7267% ( 2) 00:10:47.162 12649.307 - 12706.543: 97.7395% ( 2) 00:10:47.162 12706.543 - 12763.780: 97.7523% ( 2) 00:10:47.162 12763.780 - 12821.017: 97.7715% ( 3) 00:10:47.162 12821.017 - 12878.253: 97.7843% ( 2) 00:10:47.162 12878.253 - 12935.490: 97.7971% ( 2) 00:10:47.162 12935.490 - 12992.727: 97.8163% ( 3) 00:10:47.162 12992.727 - 13049.963: 97.8291% ( 2) 00:10:47.162 13049.963 - 13107.200: 97.8420% ( 2) 00:10:47.162 13107.200 - 13164.437: 97.8676% ( 4) 00:10:47.162 13164.437 - 13221.673: 97.9060% ( 6) 00:10:47.162 13221.673 - 13278.910: 97.9444% ( 6) 00:10:47.162 13278.910 - 13336.147: 97.9956% ( 8) 00:10:47.162 13336.147 - 13393.383: 98.0469% ( 8) 00:10:47.162 13393.383 - 13450.620: 98.0981% ( 8) 00:10:47.162 13450.620 - 13507.857: 98.1493% ( 8) 00:10:47.162 13507.857 - 13565.093: 98.1878% ( 6) 00:10:47.162 13565.093 - 13622.330: 98.2262% ( 6) 00:10:47.162 13622.330 - 13679.567: 98.2646% ( 6) 00:10:47.162 13679.567 - 13736.803: 98.3030% ( 6) 00:10:47.162 13736.803 - 13794.040: 98.3286% ( 4) 00:10:47.162 13794.040 - 13851.277: 98.3671% ( 6) 00:10:47.162 13851.277 - 13908.514: 98.3991% ( 5) 00:10:47.162 13908.514 - 13965.750: 98.4375% ( 6) 00:10:47.162 13965.750 - 14022.987: 98.4759% ( 6) 00:10:47.162 14022.987 - 14080.224: 98.5079% ( 5) 00:10:47.162 14080.224 - 14137.460: 98.5464% ( 6) 00:10:47.162 14137.460 - 14194.697: 98.5848% ( 6) 00:10:47.162 14194.697 - 14251.934: 98.5976% ( 2) 00:10:47.162 14251.934 - 14309.170: 98.6104% ( 2) 00:10:47.162 14309.170 - 14366.407: 98.6232% ( 2) 00:10:47.162 14366.407 - 14423.644: 98.6360% ( 2) 00:10:47.162 14423.644 - 14480.880: 98.6488% ( 2) 00:10:47.162 14480.880 - 14538.117: 98.6616% ( 2) 00:10:47.162 14538.117 - 14595.354: 98.6744% ( 2) 00:10:47.162 14595.354 - 14652.590: 98.6936% ( 3) 00:10:47.162 14652.590 - 14767.064: 98.7193% ( 4) 00:10:47.162 14767.064 - 14881.537: 98.7449% ( 4) 00:10:47.162 14881.537 - 14996.010: 98.7705% ( 4) 00:10:47.162 15453.904 - 15568.377: 98.7769% ( 1) 00:10:47.162 15568.377 - 15682.851: 98.8153% ( 6) 00:10:47.162 15682.851 - 15797.324: 98.8601% ( 7) 
00:10:47.162 15797.324 - 15911.797: 98.9050% ( 7) 00:10:47.162 15911.797 - 16026.271: 98.9434% ( 6) 00:10:47.162 16026.271 - 16140.744: 98.9882% ( 7) 00:10:47.162 16140.744 - 16255.217: 99.0266% ( 6) 00:10:47.162 16255.217 - 16369.691: 99.0715% ( 7) 00:10:47.162 16369.691 - 16484.164: 99.1099% ( 6) 00:10:47.162 16484.164 - 16598.638: 99.1611% ( 8) 00:10:47.162 16598.638 - 16713.111: 99.1803% ( 3) 00:10:47.162 29763.074 - 29992.021: 99.2252% ( 7) 00:10:47.162 29992.021 - 30220.968: 99.2700% ( 7) 00:10:47.162 30220.968 - 30449.914: 99.3148% ( 7) 00:10:47.162 30449.914 - 30678.861: 99.3532% ( 6) 00:10:47.162 30678.861 - 30907.808: 99.4109% ( 9) 00:10:47.162 30907.808 - 31136.755: 99.4621% ( 8) 00:10:47.162 31136.755 - 31365.701: 99.5069% ( 7) 00:10:47.162 31365.701 - 31594.648: 99.5517% ( 7) 00:10:47.162 31594.648 - 31823.595: 99.5902% ( 6) 00:10:47.162 35944.636 - 36173.583: 99.6286% ( 6) 00:10:47.163 36173.583 - 36402.529: 99.6798% ( 8) 00:10:47.163 36402.529 - 36631.476: 99.7246% ( 7) 00:10:47.163 36631.476 - 36860.423: 99.7695% ( 7) 00:10:47.163 36860.423 - 37089.369: 99.8271% ( 9) 00:10:47.163 37089.369 - 37318.316: 99.8783% ( 8) 00:10:47.163 37318.316 - 37547.263: 99.9232% ( 7) 00:10:47.163 37547.263 - 37776.210: 99.9680% ( 7) 00:10:47.163 37776.210 - 38005.156: 100.0000% ( 5) 00:10:47.163 00:10:47.163 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:47.163 ============================================================================== 00:10:47.163 Range in us Cumulative IO count 00:10:47.163 6897.020 - 6925.638: 0.0191% ( 3) 00:10:47.163 6925.638 - 6954.257: 0.0510% ( 5) 00:10:47.163 6954.257 - 6982.875: 0.0957% ( 7) 00:10:47.163 6982.875 - 7011.493: 0.1786% ( 13) 00:10:47.163 7011.493 - 7040.112: 0.2870% ( 17) 00:10:47.163 7040.112 - 7068.730: 0.4847% ( 31) 00:10:47.163 7068.730 - 7097.348: 0.7589% ( 43) 00:10:47.163 7097.348 - 7125.967: 1.0459% ( 45) 00:10:47.163 7125.967 - 7154.585: 1.4413% ( 62) 00:10:47.163 7154.585 - 7183.203: 1.9260% ( 76) 00:10:47.163 7183.203 - 7211.822: 2.5191% ( 93) 00:10:47.163 7211.822 - 7240.440: 3.1569% ( 100) 00:10:47.163 7240.440 - 7269.059: 3.9349% ( 122) 00:10:47.163 7269.059 - 7297.677: 5.0000% ( 167) 00:10:47.163 7297.677 - 7326.295: 6.1288% ( 177) 00:10:47.163 7326.295 - 7383.532: 9.0242% ( 454) 00:10:47.163 7383.532 - 7440.769: 12.8699% ( 603) 00:10:47.163 7440.769 - 7498.005: 17.4490% ( 718) 00:10:47.163 7498.005 - 7555.242: 22.8061% ( 840) 00:10:47.163 7555.242 - 7612.479: 28.1122% ( 832) 00:10:47.163 7612.479 - 7669.715: 33.9031% ( 908) 00:10:47.163 7669.715 - 7726.952: 39.6301% ( 898) 00:10:47.163 7726.952 - 7784.189: 45.5166% ( 923) 00:10:47.163 7784.189 - 7841.425: 51.3457% ( 914) 00:10:47.163 7841.425 - 7898.662: 57.1301% ( 907) 00:10:47.163 7898.662 - 7955.899: 62.7997% ( 889) 00:10:47.163 7955.899 - 8013.135: 68.1505% ( 839) 00:10:47.163 8013.135 - 8070.372: 73.2462% ( 799) 00:10:47.163 8070.372 - 8127.609: 78.1186% ( 764) 00:10:47.163 8127.609 - 8184.845: 82.5765% ( 699) 00:10:47.163 8184.845 - 8242.082: 86.3839% ( 597) 00:10:47.163 8242.082 - 8299.319: 89.5599% ( 498) 00:10:47.163 8299.319 - 8356.555: 91.8686% ( 362) 00:10:47.163 8356.555 - 8413.792: 93.3546% ( 233) 00:10:47.163 8413.792 - 8471.029: 94.2921% ( 147) 00:10:47.163 8471.029 - 8528.266: 94.8023% ( 80) 00:10:47.163 8528.266 - 8585.502: 95.1084% ( 48) 00:10:47.163 8585.502 - 8642.739: 95.3189% ( 33) 00:10:47.163 8642.739 - 8699.976: 95.4656% ( 23) 00:10:47.163 8699.976 - 8757.212: 95.5740% ( 17) 00:10:47.163 8757.212 - 8814.449: 95.7207% ( 23) 00:10:47.163 
8814.449 - 8871.686: 95.8163% ( 15) 00:10:47.163 8871.686 - 8928.922: 95.8929% ( 12) 00:10:47.163 8928.922 - 8986.159: 95.9439% ( 8) 00:10:47.163 8986.159 - 9043.396: 95.9885% ( 7) 00:10:47.163 9043.396 - 9100.632: 96.0395% ( 8) 00:10:47.163 9100.632 - 9157.869: 96.0906% ( 8) 00:10:47.163 9157.869 - 9215.106: 96.1224% ( 5) 00:10:47.163 9215.106 - 9272.342: 96.1480% ( 4) 00:10:47.163 9272.342 - 9329.579: 96.1671% ( 3) 00:10:47.163 9329.579 - 9386.816: 96.1926% ( 4) 00:10:47.163 9386.816 - 9444.052: 96.2181% ( 4) 00:10:47.163 9444.052 - 9501.289: 96.2372% ( 3) 00:10:47.163 9501.289 - 9558.526: 96.2628% ( 4) 00:10:47.163 9558.526 - 9615.762: 96.2946% ( 5) 00:10:47.163 9615.762 - 9672.999: 96.3138% ( 3) 00:10:47.163 9672.999 - 9730.236: 96.3265% ( 2) 00:10:47.163 9730.236 - 9787.472: 96.3393% ( 2) 00:10:47.163 9787.472 - 9844.709: 96.3648% ( 4) 00:10:47.163 9844.709 - 9901.946: 96.3967% ( 5) 00:10:47.163 9901.946 - 9959.183: 96.4222% ( 4) 00:10:47.163 9959.183 - 10016.419: 96.4477% ( 4) 00:10:47.163 10016.419 - 10073.656: 96.4796% ( 5) 00:10:47.163 10073.656 - 10130.893: 96.5051% ( 4) 00:10:47.163 10130.893 - 10188.129: 96.5434% ( 6) 00:10:47.163 10188.129 - 10245.366: 96.5816% ( 6) 00:10:47.163 10245.366 - 10302.603: 96.6263% ( 7) 00:10:47.163 10302.603 - 10359.839: 96.6709% ( 7) 00:10:47.163 10359.839 - 10417.076: 96.7283% ( 9) 00:10:47.163 10417.076 - 10474.313: 96.7730% ( 7) 00:10:47.163 10474.313 - 10531.549: 96.8240% ( 8) 00:10:47.163 10531.549 - 10588.786: 96.8686% ( 7) 00:10:47.163 10588.786 - 10646.023: 96.9005% ( 5) 00:10:47.163 10646.023 - 10703.259: 96.9196% ( 3) 00:10:47.163 10703.259 - 10760.496: 96.9388% ( 3) 00:10:47.163 10760.496 - 10817.733: 96.9579% ( 3) 00:10:47.163 10817.733 - 10874.969: 96.9770% ( 3) 00:10:47.163 10874.969 - 10932.206: 96.9962% ( 3) 00:10:47.163 10932.206 - 10989.443: 97.0153% ( 3) 00:10:47.163 10989.443 - 11046.679: 97.0344% ( 3) 00:10:47.163 11046.679 - 11103.916: 97.0536% ( 3) 00:10:47.163 11103.916 - 11161.153: 97.0727% ( 3) 00:10:47.163 11161.153 - 11218.390: 97.0855% ( 2) 00:10:47.163 11218.390 - 11275.626: 97.1046% ( 3) 00:10:47.163 11275.626 - 11332.863: 97.1173% ( 2) 00:10:47.163 11332.863 - 11390.100: 97.1365% ( 3) 00:10:47.163 11390.100 - 11447.336: 97.1429% ( 1) 00:10:47.163 12019.703 - 12076.940: 97.1556% ( 2) 00:10:47.163 12076.940 - 12134.176: 97.1684% ( 2) 00:10:47.163 12134.176 - 12191.413: 97.1811% ( 2) 00:10:47.163 12191.413 - 12248.650: 97.1875% ( 1) 00:10:47.163 12248.650 - 12305.886: 97.2066% ( 3) 00:10:47.163 12305.886 - 12363.123: 97.2321% ( 4) 00:10:47.163 12363.123 - 12420.360: 97.3023% ( 11) 00:10:47.163 12420.360 - 12477.597: 97.3214% ( 3) 00:10:47.163 12477.597 - 12534.833: 97.3597% ( 6) 00:10:47.163 12534.833 - 12592.070: 97.4043% ( 7) 00:10:47.163 12592.070 - 12649.307: 97.4426% ( 6) 00:10:47.163 12649.307 - 12706.543: 97.4809% ( 6) 00:10:47.163 12706.543 - 12763.780: 97.5255% ( 7) 00:10:47.163 12763.780 - 12821.017: 97.5765% ( 8) 00:10:47.163 12821.017 - 12878.253: 97.6531% ( 12) 00:10:47.163 12878.253 - 12935.490: 97.7105% ( 9) 00:10:47.163 12935.490 - 12992.727: 97.7742% ( 10) 00:10:47.163 12992.727 - 13049.963: 97.8380% ( 10) 00:10:47.163 13049.963 - 13107.200: 97.8890% ( 8) 00:10:47.163 13107.200 - 13164.437: 97.9592% ( 11) 00:10:47.163 13164.437 - 13221.673: 98.0102% ( 8) 00:10:47.163 13221.673 - 13278.910: 98.0485% ( 6) 00:10:47.163 13278.910 - 13336.147: 98.0867% ( 6) 00:10:47.163 13336.147 - 13393.383: 98.1250% ( 6) 00:10:47.163 13393.383 - 13450.620: 98.1633% ( 6) 00:10:47.163 13450.620 - 13507.857: 98.1888% ( 4) 
00:10:47.163 13507.857 - 13565.093: 98.2334% ( 7) 00:10:47.163 13565.093 - 13622.330: 98.2653% ( 5) 00:10:47.163 13622.330 - 13679.567: 98.3036% ( 6) 00:10:47.163 13679.567 - 13736.803: 98.3355% ( 5) 00:10:47.163 13736.803 - 13794.040: 98.3673% ( 5) 00:10:47.163 13794.040 - 13851.277: 98.3929% ( 4) 00:10:47.163 13851.277 - 13908.514: 98.4056% ( 2) 00:10:47.163 13908.514 - 13965.750: 98.4184% ( 2) 00:10:47.163 13965.750 - 14022.987: 98.4311% ( 2) 00:10:47.163 14022.987 - 14080.224: 98.4439% ( 2) 00:10:47.163 14080.224 - 14137.460: 98.4566% ( 2) 00:10:47.163 14137.460 - 14194.697: 98.4694% ( 2) 00:10:47.163 14194.697 - 14251.934: 98.4821% ( 2) 00:10:47.163 14251.934 - 14309.170: 98.4949% ( 2) 00:10:47.163 14309.170 - 14366.407: 98.5140% ( 3) 00:10:47.163 14366.407 - 14423.644: 98.5332% ( 3) 00:10:47.163 14423.644 - 14480.880: 98.5459% ( 2) 00:10:47.163 14480.880 - 14538.117: 98.5587% ( 2) 00:10:47.163 14538.117 - 14595.354: 98.5714% ( 2) 00:10:47.163 14595.354 - 14652.590: 98.5842% ( 2) 00:10:47.163 14652.590 - 14767.064: 98.6033% ( 3) 00:10:47.163 14767.064 - 14881.537: 98.6352% ( 5) 00:10:47.163 14881.537 - 14996.010: 98.6671% ( 5) 00:10:47.163 14996.010 - 15110.484: 98.6926% ( 4) 00:10:47.163 15110.484 - 15224.957: 98.7245% ( 5) 00:10:47.163 15224.957 - 15339.431: 98.7500% ( 4) 00:10:47.163 15339.431 - 15453.904: 98.7755% ( 4) 00:10:47.163 15453.904 - 15568.377: 98.8010% ( 4) 00:10:47.163 15568.377 - 15682.851: 98.8329% ( 5) 00:10:47.163 15682.851 - 15797.324: 98.8776% ( 7) 00:10:47.163 15797.324 - 15911.797: 98.9158% ( 6) 00:10:47.163 15911.797 - 16026.271: 98.9605% ( 7) 00:10:47.163 16026.271 - 16140.744: 98.9987% ( 6) 00:10:47.163 16140.744 - 16255.217: 99.0497% ( 8) 00:10:47.163 16255.217 - 16369.691: 99.0944% ( 7) 00:10:47.163 16369.691 - 16484.164: 99.1390% ( 7) 00:10:47.163 16484.164 - 16598.638: 99.1837% ( 7) 00:10:47.163 21978.886 - 22093.359: 99.1964% ( 2) 00:10:47.163 22093.359 - 22207.832: 99.2156% ( 3) 00:10:47.163 22207.832 - 22322.306: 99.2474% ( 5) 00:10:47.163 22322.306 - 22436.779: 99.2730% ( 4) 00:10:47.163 22436.779 - 22551.252: 99.2985% ( 4) 00:10:47.163 22551.252 - 22665.726: 99.3240% ( 4) 00:10:47.163 22665.726 - 22780.199: 99.3495% ( 4) 00:10:47.163 22780.199 - 22894.672: 99.3750% ( 4) 00:10:47.163 22894.672 - 23009.146: 99.4005% ( 4) 00:10:47.163 23009.146 - 23123.619: 99.4260% ( 4) 00:10:47.163 23123.619 - 23238.093: 99.4515% ( 4) 00:10:47.163 23238.093 - 23352.566: 99.4707% ( 3) 00:10:47.163 23352.566 - 23467.039: 99.4962% ( 4) 00:10:47.163 23467.039 - 23581.513: 99.5281% ( 5) 00:10:47.163 23581.513 - 23695.986: 99.5536% ( 4) 00:10:47.163 23695.986 - 23810.459: 99.5791% ( 4) 00:10:47.163 23810.459 - 23924.933: 99.5918% ( 2) 00:10:47.163 28847.287 - 28961.761: 99.5982% ( 1) 00:10:47.163 28961.761 - 29076.234: 99.6237% ( 4) 00:10:47.163 29076.234 - 29190.707: 99.6492% ( 4) 00:10:47.163 29190.707 - 29305.181: 99.6747% ( 4) 00:10:47.163 29305.181 - 29534.128: 99.7194% ( 7) 00:10:47.163 29534.128 - 29763.074: 99.7704% ( 8) 00:10:47.163 29763.074 - 29992.021: 99.8151% ( 7) 00:10:47.163 29992.021 - 30220.968: 99.8597% ( 7) 00:10:47.163 30220.968 - 30449.914: 99.9043% ( 7) 00:10:47.163 30449.914 - 30678.861: 99.9490% ( 7) 00:10:47.163 30678.861 - 30907.808: 99.9936% ( 7) 00:10:47.163 30907.808 - 31136.755: 100.0000% ( 1) 00:10:47.163 00:10:47.163 15:06:24 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:10:48.542 Initializing NVMe Controllers 00:10:48.542 Attached to NVMe Controller at 
0000:00:10.0 [1b36:0010] 00:10:48.542 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:48.542 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:48.542 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:48.542 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:48.542 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:48.542 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:48.542 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:48.542 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:48.542 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:48.542 Initialization complete. Launching workers. 00:10:48.542 ======================================================== 00:10:48.542 Latency(us) 00:10:48.543 Device Information : IOPS MiB/s Average min max 00:10:48.543 PCIE (0000:00:10.0) NSID 1 from core 0: 9247.37 108.37 13886.37 8973.17 45316.93 00:10:48.543 PCIE (0000:00:11.0) NSID 1 from core 0: 9247.37 108.37 13865.26 9052.79 44043.64 00:10:48.543 PCIE (0000:00:13.0) NSID 1 from core 0: 9247.37 108.37 13844.58 9268.58 43442.12 00:10:48.543 PCIE (0000:00:12.0) NSID 1 from core 0: 9247.37 108.37 13824.08 9168.47 42312.78 00:10:48.543 PCIE (0000:00:12.0) NSID 2 from core 0: 9247.37 108.37 13803.82 9287.58 40894.97 00:10:48.543 PCIE (0000:00:12.0) NSID 3 from core 0: 9311.14 109.11 13689.06 9094.00 29291.25 00:10:48.543 ======================================================== 00:10:48.543 Total : 55547.97 650.95 13818.71 8973.17 45316.93 00:10:48.543 00:10:48.543 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:48.543 ================================================================================= 00:10:48.543 1.00000% : 9501.289us 00:10:48.543 10.00000% : 10245.366us 00:10:48.543 25.00000% : 11447.336us 00:10:48.543 50.00000% : 13736.803us 00:10:48.543 75.00000% : 15682.851us 00:10:48.543 90.00000% : 17171.004us 00:10:48.543 95.00000% : 17972.318us 00:10:48.543 98.00000% : 19002.578us 00:10:48.543 99.00000% : 34342.009us 00:10:48.543 99.50000% : 43041.984us 00:10:48.543 99.90000% : 44873.558us 00:10:48.543 99.99000% : 45331.452us 00:10:48.543 99.99900% : 45331.452us 00:10:48.543 99.99990% : 45331.452us 00:10:48.543 99.99999% : 45331.452us 00:10:48.543 00:10:48.543 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:48.543 ================================================================================= 00:10:48.543 1.00000% : 9558.526us 00:10:48.543 10.00000% : 10130.893us 00:10:48.543 25.00000% : 11447.336us 00:10:48.543 50.00000% : 13794.040us 00:10:48.543 75.00000% : 15682.851us 00:10:48.543 90.00000% : 17056.531us 00:10:48.543 95.00000% : 18201.265us 00:10:48.543 98.00000% : 18773.631us 00:10:48.543 99.00000% : 32739.382us 00:10:48.543 99.50000% : 42126.197us 00:10:48.543 99.90000% : 43728.824us 00:10:48.543 99.99000% : 44186.718us 00:10:48.543 99.99900% : 44186.718us 00:10:48.543 99.99990% : 44186.718us 00:10:48.543 99.99999% : 44186.718us 00:10:48.543 00:10:48.543 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:48.543 ================================================================================= 00:10:48.543 1.00000% : 9615.762us 00:10:48.543 10.00000% : 10073.656us 00:10:48.543 25.00000% : 11561.810us 00:10:48.543 50.00000% : 13679.567us 00:10:48.543 75.00000% : 15682.851us 00:10:48.543 90.00000% : 17056.531us 00:10:48.543 95.00000% : 17857.845us 00:10:48.543 98.00000% : 18773.631us 00:10:48.543 99.00000% : 32510.435us 00:10:48.543 99.50000% : 
41439.357us 00:10:48.543 99.90000% : 43270.931us 00:10:48.543 99.99000% : 43499.878us 00:10:48.543 99.99900% : 43499.878us 00:10:48.543 99.99990% : 43499.878us 00:10:48.543 99.99999% : 43499.878us 00:10:48.543 00:10:48.543 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:48.543 ================================================================================= 00:10:48.543 1.00000% : 9615.762us 00:10:48.543 10.00000% : 10073.656us 00:10:48.543 25.00000% : 11504.573us 00:10:48.543 50.00000% : 13622.330us 00:10:48.543 75.00000% : 15797.324us 00:10:48.543 90.00000% : 16942.058us 00:10:48.543 95.00000% : 17857.845us 00:10:48.543 98.00000% : 19117.052us 00:10:48.543 99.00000% : 30449.914us 00:10:48.543 99.50000% : 40294.624us 00:10:48.543 99.90000% : 42126.197us 00:10:48.543 99.99000% : 42355.144us 00:10:48.543 99.99900% : 42355.144us 00:10:48.543 99.99990% : 42355.144us 00:10:48.543 99.99999% : 42355.144us 00:10:48.543 00:10:48.543 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:48.543 ================================================================================= 00:10:48.543 1.00000% : 9615.762us 00:10:48.543 10.00000% : 10073.656us 00:10:48.543 25.00000% : 11504.573us 00:10:48.543 50.00000% : 13507.857us 00:10:48.543 75.00000% : 15797.324us 00:10:48.543 90.00000% : 17056.531us 00:10:48.543 95.00000% : 17972.318us 00:10:48.543 98.00000% : 19002.578us 00:10:48.543 99.00000% : 29076.234us 00:10:48.543 99.50000% : 39149.890us 00:10:48.543 99.90000% : 40752.517us 00:10:48.543 99.99000% : 40981.464us 00:10:48.543 99.99900% : 40981.464us 00:10:48.543 99.99990% : 40981.464us 00:10:48.543 99.99999% : 40981.464us 00:10:48.543 00:10:48.543 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:48.543 ================================================================================= 00:10:48.543 1.00000% : 9672.999us 00:10:48.543 10.00000% : 10073.656us 00:10:48.543 25.00000% : 11390.100us 00:10:48.543 50.00000% : 13794.040us 00:10:48.543 75.00000% : 15682.851us 00:10:48.543 90.00000% : 17171.004us 00:10:48.543 95.00000% : 17743.371us 00:10:48.543 98.00000% : 19002.578us 00:10:48.543 99.00000% : 20261.785us 00:10:48.543 99.50000% : 28045.974us 00:10:48.543 99.90000% : 29076.234us 00:10:48.543 99.99000% : 29305.181us 00:10:48.543 99.99900% : 29305.181us 00:10:48.543 99.99990% : 29305.181us 00:10:48.543 99.99999% : 29305.181us 00:10:48.543 00:10:48.543 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:48.543 ============================================================================== 00:10:48.543 Range in us Cumulative IO count 00:10:48.543 8928.922 - 8986.159: 0.0108% ( 1) 00:10:48.543 9043.396 - 9100.632: 0.0216% ( 1) 00:10:48.543 9100.632 - 9157.869: 0.0323% ( 1) 00:10:48.543 9157.869 - 9215.106: 0.0754% ( 4) 00:10:48.543 9215.106 - 9272.342: 0.2047% ( 12) 00:10:48.543 9272.342 - 9329.579: 0.3125% ( 10) 00:10:48.543 9329.579 - 9386.816: 0.6789% ( 34) 00:10:48.543 9386.816 - 9444.052: 0.9698% ( 27) 00:10:48.543 9444.052 - 9501.289: 1.5517% ( 54) 00:10:48.543 9501.289 - 9558.526: 2.1875% ( 59) 00:10:48.543 9558.526 - 9615.762: 2.8664% ( 63) 00:10:48.543 9615.762 - 9672.999: 3.4052% ( 50) 00:10:48.543 9672.999 - 9730.236: 4.0194% ( 57) 00:10:48.543 9730.236 - 9787.472: 4.8276% ( 75) 00:10:48.543 9787.472 - 9844.709: 5.4634% ( 59) 00:10:48.543 9844.709 - 9901.946: 5.9591% ( 46) 00:10:48.543 9901.946 - 9959.183: 6.5409% ( 54) 00:10:48.543 9959.183 - 10016.419: 7.3276% ( 73) 00:10:48.543 10016.419 - 10073.656: 8.1681% ( 78) 
00:10:48.543 10073.656 - 10130.893: 8.9547% ( 73) 00:10:48.543 10130.893 - 10188.129: 9.8276% ( 81) 00:10:48.543 10188.129 - 10245.366: 10.7543% ( 86) 00:10:48.543 10245.366 - 10302.603: 11.5948% ( 78) 00:10:48.543 10302.603 - 10359.839: 12.3599% ( 71) 00:10:48.543 10359.839 - 10417.076: 12.9095% ( 51) 00:10:48.543 10417.076 - 10474.313: 13.5129% ( 56) 00:10:48.543 10474.313 - 10531.549: 13.9978% ( 45) 00:10:48.543 10531.549 - 10588.786: 14.3750% ( 35) 00:10:48.543 10588.786 - 10646.023: 14.8276% ( 42) 00:10:48.543 10646.023 - 10703.259: 15.3017% ( 44) 00:10:48.543 10703.259 - 10760.496: 15.7759% ( 44) 00:10:48.543 10760.496 - 10817.733: 16.3362% ( 52) 00:10:48.543 10817.733 - 10874.969: 16.9720% ( 59) 00:10:48.543 10874.969 - 10932.206: 17.6940% ( 67) 00:10:48.543 10932.206 - 10989.443: 18.4052% ( 66) 00:10:48.543 10989.443 - 11046.679: 19.3427% ( 87) 00:10:48.543 11046.679 - 11103.916: 20.2155% ( 81) 00:10:48.543 11103.916 - 11161.153: 20.9914% ( 72) 00:10:48.543 11161.153 - 11218.390: 21.9397% ( 88) 00:10:48.543 11218.390 - 11275.626: 22.9310% ( 92) 00:10:48.543 11275.626 - 11332.863: 23.8901% ( 89) 00:10:48.543 11332.863 - 11390.100: 24.8276% ( 87) 00:10:48.543 11390.100 - 11447.336: 25.8297% ( 93) 00:10:48.543 11447.336 - 11504.573: 26.8966% ( 99) 00:10:48.543 11504.573 - 11561.810: 27.9957% ( 102) 00:10:48.543 11561.810 - 11619.046: 28.9332% ( 87) 00:10:48.543 11619.046 - 11676.283: 29.7629% ( 77) 00:10:48.543 11676.283 - 11733.520: 30.4957% ( 68) 00:10:48.543 11733.520 - 11790.756: 31.1530% ( 61) 00:10:48.543 11790.756 - 11847.993: 32.0582% ( 84) 00:10:48.543 11847.993 - 11905.230: 32.8664% ( 75) 00:10:48.543 11905.230 - 11962.466: 33.6207% ( 70) 00:10:48.543 11962.466 - 12019.703: 34.3966% ( 72) 00:10:48.543 12019.703 - 12076.940: 35.0431% ( 60) 00:10:48.543 12076.940 - 12134.176: 35.8405% ( 74) 00:10:48.543 12134.176 - 12191.413: 36.4763% ( 59) 00:10:48.543 12191.413 - 12248.650: 37.1983% ( 67) 00:10:48.543 12248.650 - 12305.886: 37.6940% ( 46) 00:10:48.543 12305.886 - 12363.123: 38.1681% ( 44) 00:10:48.543 12363.123 - 12420.360: 38.5453% ( 35) 00:10:48.543 12420.360 - 12477.597: 38.9009% ( 33) 00:10:48.543 12477.597 - 12534.833: 39.3211% ( 39) 00:10:48.543 12534.833 - 12592.070: 39.7414% ( 39) 00:10:48.543 12592.070 - 12649.307: 40.2694% ( 49) 00:10:48.543 12649.307 - 12706.543: 40.8728% ( 56) 00:10:48.543 12706.543 - 12763.780: 41.3254% ( 42) 00:10:48.543 12763.780 - 12821.017: 41.8642% ( 50) 00:10:48.543 12821.017 - 12878.253: 42.5323% ( 62) 00:10:48.543 12878.253 - 12935.490: 43.0388% ( 47) 00:10:48.543 12935.490 - 12992.727: 43.5022% ( 43) 00:10:48.543 12992.727 - 13049.963: 43.8685% ( 34) 00:10:48.543 13049.963 - 13107.200: 44.4720% ( 56) 00:10:48.543 13107.200 - 13164.437: 44.9246% ( 42) 00:10:48.543 13164.437 - 13221.673: 45.5927% ( 62) 00:10:48.543 13221.673 - 13278.910: 46.1315% ( 50) 00:10:48.543 13278.910 - 13336.147: 46.6379% ( 47) 00:10:48.543 13336.147 - 13393.383: 47.1983% ( 52) 00:10:48.543 13393.383 - 13450.620: 47.6401% ( 41) 00:10:48.543 13450.620 - 13507.857: 48.3836% ( 69) 00:10:48.543 13507.857 - 13565.093: 48.9763% ( 55) 00:10:48.543 13565.093 - 13622.330: 49.4720% ( 46) 00:10:48.543 13622.330 - 13679.567: 49.9138% ( 41) 00:10:48.543 13679.567 - 13736.803: 50.5172% ( 56) 00:10:48.543 13736.803 - 13794.040: 51.0453% ( 49) 00:10:48.543 13794.040 - 13851.277: 52.1336% ( 101) 00:10:48.543 13851.277 - 13908.514: 52.9957% ( 80) 00:10:48.543 13908.514 - 13965.750: 53.9440% ( 88) 00:10:48.543 13965.750 - 14022.987: 54.8707% ( 86) 00:10:48.543 14022.987 - 14080.224: 
55.8297% ( 89) 00:10:48.543 14080.224 - 14137.460: 56.6595% ( 77) 00:10:48.543 14137.460 - 14194.697: 57.2629% ( 56) 00:10:48.543 14194.697 - 14251.934: 58.0927% ( 77) 00:10:48.543 14251.934 - 14309.170: 58.8039% ( 66) 00:10:48.543 14309.170 - 14366.407: 59.4504% ( 60) 00:10:48.543 14366.407 - 14423.644: 60.2155% ( 71) 00:10:48.543 14423.644 - 14480.880: 60.8405% ( 58) 00:10:48.544 14480.880 - 14538.117: 61.4978% ( 61) 00:10:48.544 14538.117 - 14595.354: 62.0690% ( 53) 00:10:48.544 14595.354 - 14652.590: 62.8664% ( 74) 00:10:48.544 14652.590 - 14767.064: 64.3750% ( 140) 00:10:48.544 14767.064 - 14881.537: 65.9052% ( 142) 00:10:48.544 14881.537 - 14996.010: 67.6616% ( 163) 00:10:48.544 14996.010 - 15110.484: 69.2026% ( 143) 00:10:48.544 15110.484 - 15224.957: 70.6897% ( 138) 00:10:48.544 15224.957 - 15339.431: 72.2091% ( 141) 00:10:48.544 15339.431 - 15453.904: 73.7392% ( 142) 00:10:48.544 15453.904 - 15568.377: 74.8707% ( 105) 00:10:48.544 15568.377 - 15682.851: 76.0991% ( 114) 00:10:48.544 15682.851 - 15797.324: 77.3707% ( 118) 00:10:48.544 15797.324 - 15911.797: 78.8470% ( 137) 00:10:48.544 15911.797 - 16026.271: 80.3448% ( 139) 00:10:48.544 16026.271 - 16140.744: 81.5194% ( 109) 00:10:48.544 16140.744 - 16255.217: 82.6078% ( 101) 00:10:48.544 16255.217 - 16369.691: 83.4159% ( 75) 00:10:48.544 16369.691 - 16484.164: 84.3642% ( 88) 00:10:48.544 16484.164 - 16598.638: 85.2155% ( 79) 00:10:48.544 16598.638 - 16713.111: 86.3901% ( 109) 00:10:48.544 16713.111 - 16827.584: 87.5970% ( 112) 00:10:48.544 16827.584 - 16942.058: 88.7716% ( 109) 00:10:48.544 16942.058 - 17056.531: 89.6983% ( 86) 00:10:48.544 17056.531 - 17171.004: 90.6034% ( 84) 00:10:48.544 17171.004 - 17285.478: 91.3254% ( 67) 00:10:48.544 17285.478 - 17399.951: 92.0151% ( 64) 00:10:48.544 17399.951 - 17514.424: 92.5970% ( 54) 00:10:48.544 17514.424 - 17628.898: 93.4375% ( 78) 00:10:48.544 17628.898 - 17743.371: 94.1056% ( 62) 00:10:48.544 17743.371 - 17857.845: 94.8922% ( 73) 00:10:48.544 17857.845 - 17972.318: 95.5711% ( 63) 00:10:48.544 17972.318 - 18086.791: 96.0991% ( 49) 00:10:48.544 18086.791 - 18201.265: 96.5302% ( 40) 00:10:48.544 18201.265 - 18315.738: 96.8427% ( 29) 00:10:48.544 18315.738 - 18430.211: 97.1659% ( 30) 00:10:48.544 18430.211 - 18544.685: 97.3599% ( 18) 00:10:48.544 18544.685 - 18659.158: 97.6185% ( 24) 00:10:48.544 18659.158 - 18773.631: 97.8556% ( 22) 00:10:48.544 18773.631 - 18888.105: 97.9741% ( 11) 00:10:48.544 18888.105 - 19002.578: 98.0927% ( 11) 00:10:48.544 19002.578 - 19117.052: 98.2543% ( 15) 00:10:48.544 19117.052 - 19231.525: 98.3082% ( 5) 00:10:48.544 19231.525 - 19345.998: 98.3728% ( 6) 00:10:48.544 19345.998 - 19460.472: 98.4375% ( 6) 00:10:48.544 19460.472 - 19574.945: 98.4698% ( 3) 00:10:48.544 19918.365 - 20032.838: 98.4914% ( 2) 00:10:48.544 20032.838 - 20147.312: 98.5237% ( 3) 00:10:48.544 20147.312 - 20261.785: 98.5453% ( 2) 00:10:48.544 20261.785 - 20376.259: 98.5668% ( 2) 00:10:48.544 20376.259 - 20490.732: 98.5884% ( 2) 00:10:48.544 20490.732 - 20605.205: 98.6099% ( 2) 00:10:48.544 20605.205 - 20719.679: 98.6207% ( 1) 00:10:48.544 33426.222 - 33655.169: 98.6315% ( 1) 00:10:48.544 33655.169 - 33884.115: 98.6638% ( 3) 00:10:48.544 33884.115 - 34113.062: 98.8254% ( 15) 00:10:48.544 34113.062 - 34342.009: 99.0194% ( 18) 00:10:48.544 34342.009 - 34570.955: 99.1272% ( 10) 00:10:48.544 34570.955 - 34799.902: 99.2241% ( 9) 00:10:48.544 34799.902 - 35028.849: 99.2996% ( 7) 00:10:48.544 35028.849 - 35257.796: 99.3103% ( 1) 00:10:48.544 41210.410 - 41439.357: 99.3211% ( 1) 00:10:48.544 
41897.251 - 42126.197: 99.3427% ( 2) 00:10:48.544 42126.197 - 42355.144: 99.3858% ( 4) 00:10:48.544 42355.144 - 42584.091: 99.4289% ( 4) 00:10:48.544 42584.091 - 42813.038: 99.4828% ( 5) 00:10:48.544 42813.038 - 43041.984: 99.5151% ( 3) 00:10:48.544 43041.984 - 43270.931: 99.5797% ( 6) 00:10:48.544 43270.931 - 43499.878: 99.6336% ( 5) 00:10:48.544 43499.878 - 43728.824: 99.6767% ( 4) 00:10:48.544 43728.824 - 43957.771: 99.7306% ( 5) 00:10:48.544 43957.771 - 44186.718: 99.7629% ( 3) 00:10:48.544 44186.718 - 44415.665: 99.7953% ( 3) 00:10:48.544 44415.665 - 44644.611: 99.8599% ( 6) 00:10:48.544 44644.611 - 44873.558: 99.9030% ( 4) 00:10:48.544 44873.558 - 45102.505: 99.9569% ( 5) 00:10:48.544 45102.505 - 45331.452: 100.0000% ( 4) 00:10:48.544 00:10:48.544 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:48.544 ============================================================================== 00:10:48.544 Range in us Cumulative IO count 00:10:48.544 9043.396 - 9100.632: 0.0108% ( 1) 00:10:48.544 9215.106 - 9272.342: 0.0539% ( 4) 00:10:48.544 9272.342 - 9329.579: 0.1293% ( 7) 00:10:48.544 9329.579 - 9386.816: 0.2478% ( 11) 00:10:48.544 9386.816 - 9444.052: 0.3772% ( 12) 00:10:48.544 9444.052 - 9501.289: 0.6466% ( 25) 00:10:48.544 9501.289 - 9558.526: 1.0237% ( 35) 00:10:48.544 9558.526 - 9615.762: 1.4332% ( 38) 00:10:48.544 9615.762 - 9672.999: 2.0797% ( 60) 00:10:48.544 9672.999 - 9730.236: 2.8879% ( 75) 00:10:48.544 9730.236 - 9787.472: 3.9547% ( 99) 00:10:48.544 9787.472 - 9844.709: 4.9677% ( 94) 00:10:48.544 9844.709 - 9901.946: 6.2931% ( 123) 00:10:48.544 9901.946 - 9959.183: 7.5216% ( 114) 00:10:48.544 9959.183 - 10016.419: 8.4483% ( 86) 00:10:48.544 10016.419 - 10073.656: 9.4720% ( 95) 00:10:48.544 10073.656 - 10130.893: 10.5496% ( 100) 00:10:48.544 10130.893 - 10188.129: 11.2069% ( 61) 00:10:48.544 10188.129 - 10245.366: 11.8427% ( 59) 00:10:48.544 10245.366 - 10302.603: 12.6185% ( 72) 00:10:48.544 10302.603 - 10359.839: 13.3082% ( 64) 00:10:48.544 10359.839 - 10417.076: 13.9655% ( 61) 00:10:48.544 10417.076 - 10474.313: 14.6444% ( 63) 00:10:48.544 10474.313 - 10531.549: 15.0000% ( 33) 00:10:48.544 10531.549 - 10588.786: 15.3664% ( 34) 00:10:48.544 10588.786 - 10646.023: 15.7974% ( 40) 00:10:48.544 10646.023 - 10703.259: 16.1422% ( 32) 00:10:48.544 10703.259 - 10760.496: 16.3901% ( 23) 00:10:48.544 10760.496 - 10817.733: 16.6487% ( 24) 00:10:48.544 10817.733 - 10874.969: 16.9720% ( 30) 00:10:48.544 10874.969 - 10932.206: 17.3922% ( 39) 00:10:48.544 10932.206 - 10989.443: 17.9957% ( 56) 00:10:48.544 10989.443 - 11046.679: 18.7931% ( 74) 00:10:48.544 11046.679 - 11103.916: 19.7091% ( 85) 00:10:48.544 11103.916 - 11161.153: 20.5496% ( 78) 00:10:48.544 11161.153 - 11218.390: 21.5841% ( 96) 00:10:48.544 11218.390 - 11275.626: 22.3491% ( 71) 00:10:48.544 11275.626 - 11332.863: 23.2866% ( 87) 00:10:48.544 11332.863 - 11390.100: 24.1810% ( 83) 00:10:48.544 11390.100 - 11447.336: 25.3341% ( 107) 00:10:48.544 11447.336 - 11504.573: 26.5409% ( 112) 00:10:48.544 11504.573 - 11561.810: 27.8448% ( 121) 00:10:48.544 11561.810 - 11619.046: 29.2457% ( 130) 00:10:48.544 11619.046 - 11676.283: 30.5603% ( 122) 00:10:48.544 11676.283 - 11733.520: 32.0474% ( 138) 00:10:48.544 11733.520 - 11790.756: 33.2866% ( 115) 00:10:48.544 11790.756 - 11847.993: 34.4073% ( 104) 00:10:48.544 11847.993 - 11905.230: 35.3125% ( 84) 00:10:48.544 11905.230 - 11962.466: 35.9806% ( 62) 00:10:48.544 11962.466 - 12019.703: 36.4763% ( 46) 00:10:48.544 12019.703 - 12076.940: 36.8427% ( 34) 00:10:48.544 12076.940 - 
12134.176: 37.2737% ( 40) 00:10:48.544 12134.176 - 12191.413: 37.7478% ( 44) 00:10:48.544 12191.413 - 12248.650: 38.1897% ( 41) 00:10:48.544 12248.650 - 12305.886: 38.6315% ( 41) 00:10:48.544 12305.886 - 12363.123: 39.1487% ( 48) 00:10:48.544 12363.123 - 12420.360: 39.5151% ( 34) 00:10:48.544 12420.360 - 12477.597: 39.7737% ( 24) 00:10:48.544 12477.597 - 12534.833: 40.0647% ( 27) 00:10:48.544 12534.833 - 12592.070: 40.3879% ( 30) 00:10:48.544 12592.070 - 12649.307: 40.7328% ( 32) 00:10:48.544 12649.307 - 12706.543: 41.2284% ( 46) 00:10:48.544 12706.543 - 12763.780: 41.8427% ( 57) 00:10:48.544 12763.780 - 12821.017: 42.5323% ( 64) 00:10:48.544 12821.017 - 12878.253: 43.0280% ( 46) 00:10:48.544 12878.253 - 12935.490: 43.4914% ( 43) 00:10:48.544 12935.490 - 12992.727: 43.9978% ( 47) 00:10:48.544 12992.727 - 13049.963: 44.6444% ( 60) 00:10:48.544 13049.963 - 13107.200: 45.2802% ( 59) 00:10:48.544 13107.200 - 13164.437: 45.7435% ( 43) 00:10:48.544 13164.437 - 13221.673: 46.1961% ( 42) 00:10:48.544 13221.673 - 13278.910: 46.5194% ( 30) 00:10:48.544 13278.910 - 13336.147: 46.8103% ( 27) 00:10:48.544 13336.147 - 13393.383: 47.1228% ( 29) 00:10:48.544 13393.383 - 13450.620: 47.5108% ( 36) 00:10:48.544 13450.620 - 13507.857: 47.8448% ( 31) 00:10:48.544 13507.857 - 13565.093: 48.2759% ( 40) 00:10:48.544 13565.093 - 13622.330: 48.6961% ( 39) 00:10:48.544 13622.330 - 13679.567: 49.2457% ( 51) 00:10:48.544 13679.567 - 13736.803: 49.7845% ( 50) 00:10:48.544 13736.803 - 13794.040: 50.4203% ( 59) 00:10:48.544 13794.040 - 13851.277: 51.0884% ( 62) 00:10:48.544 13851.277 - 13908.514: 52.0259% ( 87) 00:10:48.544 13908.514 - 13965.750: 52.9634% ( 87) 00:10:48.544 13965.750 - 14022.987: 53.7177% ( 70) 00:10:48.544 14022.987 - 14080.224: 54.4397% ( 67) 00:10:48.544 14080.224 - 14137.460: 55.5496% ( 103) 00:10:48.544 14137.460 - 14194.697: 56.6272% ( 100) 00:10:48.544 14194.697 - 14251.934: 57.7371% ( 103) 00:10:48.544 14251.934 - 14309.170: 58.5776% ( 78) 00:10:48.544 14309.170 - 14366.407: 59.0409% ( 43) 00:10:48.544 14366.407 - 14423.644: 59.4504% ( 38) 00:10:48.544 14423.644 - 14480.880: 59.9461% ( 46) 00:10:48.544 14480.880 - 14538.117: 60.3879% ( 41) 00:10:48.544 14538.117 - 14595.354: 60.9159% ( 49) 00:10:48.544 14595.354 - 14652.590: 61.3901% ( 44) 00:10:48.544 14652.590 - 14767.064: 62.6724% ( 119) 00:10:48.544 14767.064 - 14881.537: 63.9978% ( 123) 00:10:48.544 14881.537 - 14996.010: 65.6250% ( 151) 00:10:48.544 14996.010 - 15110.484: 67.4138% ( 166) 00:10:48.544 15110.484 - 15224.957: 69.0409% ( 151) 00:10:48.544 15224.957 - 15339.431: 70.7866% ( 162) 00:10:48.544 15339.431 - 15453.904: 72.7155% ( 179) 00:10:48.544 15453.904 - 15568.377: 74.0194% ( 121) 00:10:48.544 15568.377 - 15682.851: 75.7004% ( 156) 00:10:48.544 15682.851 - 15797.324: 77.3491% ( 153) 00:10:48.544 15797.324 - 15911.797: 78.8039% ( 135) 00:10:48.544 15911.797 - 16026.271: 80.5388% ( 161) 00:10:48.544 16026.271 - 16140.744: 82.4677% ( 179) 00:10:48.544 16140.744 - 16255.217: 84.3966% ( 179) 00:10:48.544 16255.217 - 16369.691: 85.7651% ( 127) 00:10:48.544 16369.691 - 16484.164: 86.7134% ( 88) 00:10:48.544 16484.164 - 16598.638: 87.6509% ( 87) 00:10:48.544 16598.638 - 16713.111: 88.4806% ( 77) 00:10:48.544 16713.111 - 16827.584: 89.2026% ( 67) 00:10:48.544 16827.584 - 16942.058: 89.7306% ( 49) 00:10:48.544 16942.058 - 17056.531: 90.1185% ( 36) 00:10:48.544 17056.531 - 17171.004: 90.5711% ( 42) 00:10:48.545 17171.004 - 17285.478: 91.0668% ( 46) 00:10:48.545 17285.478 - 17399.951: 91.4440% ( 35) 00:10:48.545 17399.951 - 17514.424: 
91.8750% ( 40) 00:10:48.545 17514.424 - 17628.898: 92.2737% ( 37) 00:10:48.545 17628.898 - 17743.371: 92.8233% ( 51) 00:10:48.545 17743.371 - 17857.845: 93.3190% ( 46) 00:10:48.545 17857.845 - 17972.318: 93.9224% ( 56) 00:10:48.545 17972.318 - 18086.791: 94.6013% ( 63) 00:10:48.545 18086.791 - 18201.265: 95.1509% ( 51) 00:10:48.545 18201.265 - 18315.738: 95.8190% ( 62) 00:10:48.545 18315.738 - 18430.211: 96.7026% ( 82) 00:10:48.545 18430.211 - 18544.685: 97.2414% ( 50) 00:10:48.545 18544.685 - 18659.158: 97.6724% ( 40) 00:10:48.545 18659.158 - 18773.631: 98.0280% ( 33) 00:10:48.545 18773.631 - 18888.105: 98.2866% ( 24) 00:10:48.545 18888.105 - 19002.578: 98.4159% ( 12) 00:10:48.545 19002.578 - 19117.052: 98.5345% ( 11) 00:10:48.545 19117.052 - 19231.525: 98.6099% ( 7) 00:10:48.545 19231.525 - 19345.998: 98.6207% ( 1) 00:10:48.545 31365.701 - 31594.648: 98.6315% ( 1) 00:10:48.545 31594.648 - 31823.595: 98.7069% ( 7) 00:10:48.545 31823.595 - 32052.541: 98.8039% ( 9) 00:10:48.545 32052.541 - 32281.488: 98.8901% ( 8) 00:10:48.545 32281.488 - 32510.435: 98.9763% ( 8) 00:10:48.545 32510.435 - 32739.382: 99.0517% ( 7) 00:10:48.545 32739.382 - 32968.328: 99.1379% ( 8) 00:10:48.545 32968.328 - 33197.275: 99.2134% ( 7) 00:10:48.545 33197.275 - 33426.222: 99.2996% ( 8) 00:10:48.545 33426.222 - 33655.169: 99.3103% ( 1) 00:10:48.545 40981.464 - 41210.410: 99.3319% ( 2) 00:10:48.545 41210.410 - 41439.357: 99.3858% ( 5) 00:10:48.545 41439.357 - 41668.304: 99.4397% ( 5) 00:10:48.545 41668.304 - 41897.251: 99.4935% ( 5) 00:10:48.545 41897.251 - 42126.197: 99.5474% ( 5) 00:10:48.545 42126.197 - 42355.144: 99.6013% ( 5) 00:10:48.545 42355.144 - 42584.091: 99.6552% ( 5) 00:10:48.545 42584.091 - 42813.038: 99.7091% ( 5) 00:10:48.545 42813.038 - 43041.984: 99.7629% ( 5) 00:10:48.545 43041.984 - 43270.931: 99.8168% ( 5) 00:10:48.545 43270.931 - 43499.878: 99.8707% ( 5) 00:10:48.545 43499.878 - 43728.824: 99.9246% ( 5) 00:10:48.545 43728.824 - 43957.771: 99.9784% ( 5) 00:10:48.545 43957.771 - 44186.718: 100.0000% ( 2) 00:10:48.545 00:10:48.545 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:48.545 ============================================================================== 00:10:48.545 Range in us Cumulative IO count 00:10:48.545 9215.106 - 9272.342: 0.0108% ( 1) 00:10:48.545 9272.342 - 9329.579: 0.0539% ( 4) 00:10:48.545 9329.579 - 9386.816: 0.1185% ( 6) 00:10:48.545 9386.816 - 9444.052: 0.2802% ( 15) 00:10:48.545 9444.052 - 9501.289: 0.5927% ( 29) 00:10:48.545 9501.289 - 9558.526: 0.9591% ( 34) 00:10:48.545 9558.526 - 9615.762: 1.3793% ( 39) 00:10:48.545 9615.762 - 9672.999: 2.0366% ( 61) 00:10:48.545 9672.999 - 9730.236: 2.8879% ( 79) 00:10:48.545 9730.236 - 9787.472: 4.0948% ( 112) 00:10:48.545 9787.472 - 9844.709: 5.4849% ( 129) 00:10:48.545 9844.709 - 9901.946: 7.0582% ( 146) 00:10:48.545 9901.946 - 9959.183: 8.3405% ( 119) 00:10:48.545 9959.183 - 10016.419: 9.4828% ( 106) 00:10:48.545 10016.419 - 10073.656: 10.4310% ( 88) 00:10:48.545 10073.656 - 10130.893: 11.4655% ( 96) 00:10:48.545 10130.893 - 10188.129: 12.0151% ( 51) 00:10:48.545 10188.129 - 10245.366: 12.6078% ( 55) 00:10:48.545 10245.366 - 10302.603: 13.1466% ( 50) 00:10:48.545 10302.603 - 10359.839: 13.3405% ( 18) 00:10:48.545 10359.839 - 10417.076: 13.5129% ( 16) 00:10:48.545 10417.076 - 10474.313: 13.7069% ( 18) 00:10:48.545 10474.313 - 10531.549: 13.9763% ( 25) 00:10:48.545 10531.549 - 10588.786: 14.3642% ( 36) 00:10:48.545 10588.786 - 10646.023: 14.7953% ( 40) 00:10:48.545 10646.023 - 10703.259: 15.1724% ( 35) 00:10:48.545 
10703.259 - 10760.496: 15.4957% ( 30) 00:10:48.545 10760.496 - 10817.733: 15.7651% ( 25) 00:10:48.545 10817.733 - 10874.969: 16.1207% ( 33) 00:10:48.545 10874.969 - 10932.206: 16.6487% ( 49) 00:10:48.545 10932.206 - 10989.443: 17.1336% ( 45) 00:10:48.545 10989.443 - 11046.679: 17.7478% ( 57) 00:10:48.545 11046.679 - 11103.916: 18.4267% ( 63) 00:10:48.545 11103.916 - 11161.153: 19.1810% ( 70) 00:10:48.545 11161.153 - 11218.390: 20.0323% ( 79) 00:10:48.545 11218.390 - 11275.626: 21.0453% ( 94) 00:10:48.545 11275.626 - 11332.863: 21.9828% ( 87) 00:10:48.545 11332.863 - 11390.100: 22.9095% ( 86) 00:10:48.545 11390.100 - 11447.336: 23.7500% ( 78) 00:10:48.545 11447.336 - 11504.573: 24.9461% ( 111) 00:10:48.545 11504.573 - 11561.810: 25.9806% ( 96) 00:10:48.545 11561.810 - 11619.046: 26.9828% ( 93) 00:10:48.545 11619.046 - 11676.283: 28.0603% ( 100) 00:10:48.545 11676.283 - 11733.520: 28.9547% ( 83) 00:10:48.545 11733.520 - 11790.756: 30.2478% ( 120) 00:10:48.545 11790.756 - 11847.993: 31.6056% ( 126) 00:10:48.545 11847.993 - 11905.230: 32.4677% ( 80) 00:10:48.545 11905.230 - 11962.466: 33.3728% ( 84) 00:10:48.545 11962.466 - 12019.703: 34.1272% ( 70) 00:10:48.545 12019.703 - 12076.940: 34.9461% ( 76) 00:10:48.545 12076.940 - 12134.176: 35.8621% ( 85) 00:10:48.545 12134.176 - 12191.413: 36.6703% ( 75) 00:10:48.545 12191.413 - 12248.650: 37.5323% ( 80) 00:10:48.545 12248.650 - 12305.886: 38.3082% ( 72) 00:10:48.545 12305.886 - 12363.123: 38.8793% ( 53) 00:10:48.545 12363.123 - 12420.360: 39.4720% ( 55) 00:10:48.545 12420.360 - 12477.597: 40.1832% ( 66) 00:10:48.545 12477.597 - 12534.833: 40.9591% ( 72) 00:10:48.545 12534.833 - 12592.070: 41.5948% ( 59) 00:10:48.545 12592.070 - 12649.307: 42.1659% ( 53) 00:10:48.545 12649.307 - 12706.543: 42.6832% ( 48) 00:10:48.545 12706.543 - 12763.780: 43.3728% ( 64) 00:10:48.545 12763.780 - 12821.017: 43.8578% ( 45) 00:10:48.545 12821.017 - 12878.253: 44.2780% ( 39) 00:10:48.545 12878.253 - 12935.490: 44.6013% ( 30) 00:10:48.545 12935.490 - 12992.727: 45.0970% ( 46) 00:10:48.545 12992.727 - 13049.963: 45.6358% ( 50) 00:10:48.545 13049.963 - 13107.200: 46.0884% ( 42) 00:10:48.545 13107.200 - 13164.437: 46.6056% ( 48) 00:10:48.545 13164.437 - 13221.673: 47.1659% ( 52) 00:10:48.545 13221.673 - 13278.910: 47.6293% ( 43) 00:10:48.545 13278.910 - 13336.147: 47.9741% ( 32) 00:10:48.545 13336.147 - 13393.383: 48.2866% ( 29) 00:10:48.545 13393.383 - 13450.620: 48.5668% ( 26) 00:10:48.545 13450.620 - 13507.857: 48.9116% ( 32) 00:10:48.545 13507.857 - 13565.093: 49.3750% ( 43) 00:10:48.545 13565.093 - 13622.330: 49.6659% ( 27) 00:10:48.545 13622.330 - 13679.567: 50.0539% ( 36) 00:10:48.545 13679.567 - 13736.803: 50.5711% ( 48) 00:10:48.545 13736.803 - 13794.040: 51.1422% ( 53) 00:10:48.545 13794.040 - 13851.277: 52.0474% ( 84) 00:10:48.545 13851.277 - 13908.514: 53.0172% ( 90) 00:10:48.545 13908.514 - 13965.750: 53.8901% ( 81) 00:10:48.545 13965.750 - 14022.987: 54.6659% ( 72) 00:10:48.545 14022.987 - 14080.224: 55.3879% ( 67) 00:10:48.545 14080.224 - 14137.460: 56.2608% ( 81) 00:10:48.545 14137.460 - 14194.697: 57.0905% ( 77) 00:10:48.545 14194.697 - 14251.934: 57.8664% ( 72) 00:10:48.545 14251.934 - 14309.170: 58.7177% ( 79) 00:10:48.545 14309.170 - 14366.407: 59.6552% ( 87) 00:10:48.545 14366.407 - 14423.644: 60.3017% ( 60) 00:10:48.545 14423.644 - 14480.880: 61.0991% ( 74) 00:10:48.545 14480.880 - 14538.117: 61.6487% ( 51) 00:10:48.545 14538.117 - 14595.354: 62.2414% ( 55) 00:10:48.545 14595.354 - 14652.590: 62.8664% ( 58) 00:10:48.545 14652.590 - 14767.064: 64.1272% 
( 117) 00:10:48.545 14767.064 - 14881.537: 65.1940% ( 99) 00:10:48.545 14881.537 - 14996.010: 66.2931% ( 102) 00:10:48.545 14996.010 - 15110.484: 67.3491% ( 98) 00:10:48.545 15110.484 - 15224.957: 68.5237% ( 109) 00:10:48.545 15224.957 - 15339.431: 69.8707% ( 125) 00:10:48.545 15339.431 - 15453.904: 71.5302% ( 154) 00:10:48.545 15453.904 - 15568.377: 73.2004% ( 155) 00:10:48.545 15568.377 - 15682.851: 75.3879% ( 203) 00:10:48.545 15682.851 - 15797.324: 76.9397% ( 144) 00:10:48.545 15797.324 - 15911.797: 78.5453% ( 149) 00:10:48.545 15911.797 - 16026.271: 80.1185% ( 146) 00:10:48.545 16026.271 - 16140.744: 81.7349% ( 150) 00:10:48.545 16140.744 - 16255.217: 83.3621% ( 151) 00:10:48.545 16255.217 - 16369.691: 84.8815% ( 141) 00:10:48.545 16369.691 - 16484.164: 85.9914% ( 103) 00:10:48.545 16484.164 - 16598.638: 87.0905% ( 102) 00:10:48.545 16598.638 - 16713.111: 88.0819% ( 92) 00:10:48.545 16713.111 - 16827.584: 89.1056% ( 95) 00:10:48.545 16827.584 - 16942.058: 89.9353% ( 77) 00:10:48.545 16942.058 - 17056.531: 90.9591% ( 95) 00:10:48.545 17056.531 - 17171.004: 92.0582% ( 102) 00:10:48.545 17171.004 - 17285.478: 92.6293% ( 53) 00:10:48.545 17285.478 - 17399.951: 93.2651% ( 59) 00:10:48.545 17399.951 - 17514.424: 93.7931% ( 49) 00:10:48.545 17514.424 - 17628.898: 94.2996% ( 47) 00:10:48.545 17628.898 - 17743.371: 94.7953% ( 46) 00:10:48.545 17743.371 - 17857.845: 95.4634% ( 62) 00:10:48.545 17857.845 - 17972.318: 96.0668% ( 56) 00:10:48.545 17972.318 - 18086.791: 96.4547% ( 36) 00:10:48.545 18086.791 - 18201.265: 96.6703% ( 20) 00:10:48.545 18201.265 - 18315.738: 96.8966% ( 21) 00:10:48.545 18315.738 - 18430.211: 97.2737% ( 35) 00:10:48.545 18430.211 - 18544.685: 97.5754% ( 28) 00:10:48.545 18544.685 - 18659.158: 97.8664% ( 27) 00:10:48.545 18659.158 - 18773.631: 98.0819% ( 20) 00:10:48.545 18773.631 - 18888.105: 98.3944% ( 29) 00:10:48.545 18888.105 - 19002.578: 98.5022% ( 10) 00:10:48.545 19002.578 - 19117.052: 98.5560% ( 5) 00:10:48.545 19117.052 - 19231.525: 98.5884% ( 3) 00:10:48.545 19231.525 - 19345.998: 98.6099% ( 2) 00:10:48.545 19345.998 - 19460.472: 98.6207% ( 1) 00:10:48.545 31365.701 - 31594.648: 98.6853% ( 6) 00:10:48.545 31594.648 - 31823.595: 98.7823% ( 9) 00:10:48.545 31823.595 - 32052.541: 98.8685% ( 8) 00:10:48.545 32052.541 - 32281.488: 98.9547% ( 8) 00:10:48.545 32281.488 - 32510.435: 99.0409% ( 8) 00:10:48.545 32510.435 - 32739.382: 99.1379% ( 9) 00:10:48.545 32739.382 - 32968.328: 99.2241% ( 8) 00:10:48.545 32968.328 - 33197.275: 99.3103% ( 8) 00:10:48.545 40523.570 - 40752.517: 99.3534% ( 4) 00:10:48.545 40752.517 - 40981.464: 99.4181% ( 6) 00:10:48.545 40981.464 - 41210.410: 99.4720% ( 5) 00:10:48.545 41210.410 - 41439.357: 99.5151% ( 4) 00:10:48.545 41439.357 - 41668.304: 99.5690% ( 5) 00:10:48.545 41668.304 - 41897.251: 99.6228% ( 5) 00:10:48.545 41897.251 - 42126.197: 99.6875% ( 6) 00:10:48.545 42126.197 - 42355.144: 99.7414% ( 5) 00:10:48.546 42355.144 - 42584.091: 99.7953% ( 5) 00:10:48.546 42584.091 - 42813.038: 99.8384% ( 4) 00:10:48.546 42813.038 - 43041.984: 99.8922% ( 5) 00:10:48.546 43041.984 - 43270.931: 99.9461% ( 5) 00:10:48.546 43270.931 - 43499.878: 100.0000% ( 5) 00:10:48.546 00:10:48.546 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:48.546 ============================================================================== 00:10:48.546 Range in us Cumulative IO count 00:10:48.546 9157.869 - 9215.106: 0.0108% ( 1) 00:10:48.546 9215.106 - 9272.342: 0.0323% ( 2) 00:10:48.546 9272.342 - 9329.579: 0.0647% ( 3) 00:10:48.546 9329.579 - 
9386.816: 0.1401% ( 7) 00:10:48.546 9386.816 - 9444.052: 0.2909% ( 14) 00:10:48.546 9444.052 - 9501.289: 0.4957% ( 19) 00:10:48.546 9501.289 - 9558.526: 0.8728% ( 35) 00:10:48.546 9558.526 - 9615.762: 1.4978% ( 58) 00:10:48.546 9615.762 - 9672.999: 2.1983% ( 65) 00:10:48.546 9672.999 - 9730.236: 3.2651% ( 99) 00:10:48.546 9730.236 - 9787.472: 4.6875% ( 132) 00:10:48.546 9787.472 - 9844.709: 6.0129% ( 123) 00:10:48.546 9844.709 - 9901.946: 7.4246% ( 131) 00:10:48.546 9901.946 - 9959.183: 8.6530% ( 114) 00:10:48.546 9959.183 - 10016.419: 9.8276% ( 109) 00:10:48.546 10016.419 - 10073.656: 10.8405% ( 94) 00:10:48.546 10073.656 - 10130.893: 11.6164% ( 72) 00:10:48.546 10130.893 - 10188.129: 12.4246% ( 75) 00:10:48.546 10188.129 - 10245.366: 12.9418% ( 48) 00:10:48.546 10245.366 - 10302.603: 13.3513% ( 38) 00:10:48.546 10302.603 - 10359.839: 13.6207% ( 25) 00:10:48.546 10359.839 - 10417.076: 13.9440% ( 30) 00:10:48.546 10417.076 - 10474.313: 14.2349% ( 27) 00:10:48.546 10474.313 - 10531.549: 14.7522% ( 48) 00:10:48.546 10531.549 - 10588.786: 14.9030% ( 14) 00:10:48.546 10588.786 - 10646.023: 14.9892% ( 8) 00:10:48.546 10646.023 - 10703.259: 15.1832% ( 18) 00:10:48.546 10703.259 - 10760.496: 15.5603% ( 35) 00:10:48.546 10760.496 - 10817.733: 16.0668% ( 47) 00:10:48.546 10817.733 - 10874.969: 16.7565% ( 64) 00:10:48.546 10874.969 - 10932.206: 17.4030% ( 60) 00:10:48.546 10932.206 - 10989.443: 18.1573% ( 70) 00:10:48.546 10989.443 - 11046.679: 18.8793% ( 67) 00:10:48.546 11046.679 - 11103.916: 19.5690% ( 64) 00:10:48.546 11103.916 - 11161.153: 20.0539% ( 45) 00:10:48.546 11161.153 - 11218.390: 20.7004% ( 60) 00:10:48.546 11218.390 - 11275.626: 21.4009% ( 65) 00:10:48.546 11275.626 - 11332.863: 22.0905% ( 64) 00:10:48.546 11332.863 - 11390.100: 22.9526% ( 80) 00:10:48.546 11390.100 - 11447.336: 23.9871% ( 96) 00:10:48.546 11447.336 - 11504.573: 25.2047% ( 113) 00:10:48.546 11504.573 - 11561.810: 26.6487% ( 134) 00:10:48.546 11561.810 - 11619.046: 27.7155% ( 99) 00:10:48.546 11619.046 - 11676.283: 28.7392% ( 95) 00:10:48.546 11676.283 - 11733.520: 29.4289% ( 64) 00:10:48.546 11733.520 - 11790.756: 30.3017% ( 81) 00:10:48.546 11790.756 - 11847.993: 30.8944% ( 55) 00:10:48.546 11847.993 - 11905.230: 31.5302% ( 59) 00:10:48.546 11905.230 - 11962.466: 32.2629% ( 68) 00:10:48.546 11962.466 - 12019.703: 33.1466% ( 82) 00:10:48.546 12019.703 - 12076.940: 33.9440% ( 74) 00:10:48.546 12076.940 - 12134.176: 34.9138% ( 90) 00:10:48.546 12134.176 - 12191.413: 35.7220% ( 75) 00:10:48.546 12191.413 - 12248.650: 36.5409% ( 76) 00:10:48.546 12248.650 - 12305.886: 37.4353% ( 83) 00:10:48.546 12305.886 - 12363.123: 38.0927% ( 61) 00:10:48.546 12363.123 - 12420.360: 38.7931% ( 65) 00:10:48.546 12420.360 - 12477.597: 39.4073% ( 57) 00:10:48.546 12477.597 - 12534.833: 40.0216% ( 57) 00:10:48.546 12534.833 - 12592.070: 40.6681% ( 60) 00:10:48.546 12592.070 - 12649.307: 41.3254% ( 61) 00:10:48.546 12649.307 - 12706.543: 41.7672% ( 41) 00:10:48.546 12706.543 - 12763.780: 42.2306% ( 43) 00:10:48.546 12763.780 - 12821.017: 42.7155% ( 45) 00:10:48.546 12821.017 - 12878.253: 43.1466% ( 40) 00:10:48.546 12878.253 - 12935.490: 43.7500% ( 56) 00:10:48.546 12935.490 - 12992.727: 44.1595% ( 38) 00:10:48.546 12992.727 - 13049.963: 44.4289% ( 25) 00:10:48.546 13049.963 - 13107.200: 44.6767% ( 23) 00:10:48.546 13107.200 - 13164.437: 45.0216% ( 32) 00:10:48.546 13164.437 - 13221.673: 45.3987% ( 35) 00:10:48.546 13221.673 - 13278.910: 45.8297% ( 40) 00:10:48.546 13278.910 - 13336.147: 46.4009% ( 53) 00:10:48.546 13336.147 - 13393.383: 
47.0797% ( 63) 00:10:48.546 13393.383 - 13450.620: 48.1358% ( 98) 00:10:48.546 13450.620 - 13507.857: 48.9871% ( 79) 00:10:48.546 13507.857 - 13565.093: 49.9569% ( 90) 00:10:48.546 13565.093 - 13622.330: 50.9591% ( 93) 00:10:48.546 13622.330 - 13679.567: 51.9289% ( 90) 00:10:48.546 13679.567 - 13736.803: 52.8556% ( 86) 00:10:48.546 13736.803 - 13794.040: 53.5129% ( 61) 00:10:48.546 13794.040 - 13851.277: 54.1487% ( 59) 00:10:48.546 13851.277 - 13908.514: 54.9353% ( 73) 00:10:48.546 13908.514 - 13965.750: 55.7220% ( 73) 00:10:48.546 13965.750 - 14022.987: 56.4871% ( 71) 00:10:48.546 14022.987 - 14080.224: 57.3922% ( 84) 00:10:48.546 14080.224 - 14137.460: 58.2866% ( 83) 00:10:48.546 14137.460 - 14194.697: 58.8362% ( 51) 00:10:48.546 14194.697 - 14251.934: 59.5259% ( 64) 00:10:48.546 14251.934 - 14309.170: 60.2155% ( 64) 00:10:48.546 14309.170 - 14366.407: 61.1099% ( 83) 00:10:48.546 14366.407 - 14423.644: 61.8211% ( 66) 00:10:48.546 14423.644 - 14480.880: 62.3922% ( 53) 00:10:48.546 14480.880 - 14538.117: 62.8664% ( 44) 00:10:48.546 14538.117 - 14595.354: 63.3836% ( 48) 00:10:48.546 14595.354 - 14652.590: 63.7608% ( 35) 00:10:48.546 14652.590 - 14767.064: 64.6228% ( 80) 00:10:48.546 14767.064 - 14881.537: 65.6466% ( 95) 00:10:48.546 14881.537 - 14996.010: 66.9720% ( 123) 00:10:48.546 14996.010 - 15110.484: 68.2974% ( 123) 00:10:48.546 15110.484 - 15224.957: 69.4073% ( 103) 00:10:48.546 15224.957 - 15339.431: 70.7004% ( 120) 00:10:48.546 15339.431 - 15453.904: 71.8642% ( 108) 00:10:48.546 15453.904 - 15568.377: 72.8017% ( 87) 00:10:48.546 15568.377 - 15682.851: 73.8685% ( 99) 00:10:48.546 15682.851 - 15797.324: 75.3879% ( 141) 00:10:48.546 15797.324 - 15911.797: 76.8103% ( 132) 00:10:48.546 15911.797 - 16026.271: 78.0388% ( 114) 00:10:48.546 16026.271 - 16140.744: 79.6767% ( 152) 00:10:48.546 16140.744 - 16255.217: 81.6379% ( 182) 00:10:48.546 16255.217 - 16369.691: 83.4698% ( 170) 00:10:48.546 16369.691 - 16484.164: 85.2909% ( 169) 00:10:48.546 16484.164 - 16598.638: 86.8750% ( 147) 00:10:48.546 16598.638 - 16713.111: 88.2328% ( 126) 00:10:48.546 16713.111 - 16827.584: 89.6444% ( 131) 00:10:48.546 16827.584 - 16942.058: 90.6573% ( 94) 00:10:48.546 16942.058 - 17056.531: 91.5194% ( 80) 00:10:48.546 17056.531 - 17171.004: 92.1875% ( 62) 00:10:48.546 17171.004 - 17285.478: 92.7909% ( 56) 00:10:48.546 17285.478 - 17399.951: 93.4483% ( 61) 00:10:48.546 17399.951 - 17514.424: 93.9440% ( 46) 00:10:48.546 17514.424 - 17628.898: 94.3534% ( 38) 00:10:48.546 17628.898 - 17743.371: 94.9030% ( 51) 00:10:48.546 17743.371 - 17857.845: 95.2371% ( 31) 00:10:48.546 17857.845 - 17972.318: 95.7759% ( 50) 00:10:48.546 17972.318 - 18086.791: 96.1638% ( 36) 00:10:48.546 18086.791 - 18201.265: 96.5409% ( 35) 00:10:48.546 18201.265 - 18315.738: 96.9289% ( 36) 00:10:48.546 18315.738 - 18430.211: 97.1013% ( 16) 00:10:48.546 18430.211 - 18544.685: 97.2522% ( 14) 00:10:48.546 18544.685 - 18659.158: 97.3815% ( 12) 00:10:48.546 18659.158 - 18773.631: 97.5754% ( 18) 00:10:48.546 18773.631 - 18888.105: 97.7909% ( 20) 00:10:48.546 18888.105 - 19002.578: 97.9634% ( 16) 00:10:48.546 19002.578 - 19117.052: 98.3190% ( 33) 00:10:48.546 19117.052 - 19231.525: 98.5129% ( 18) 00:10:48.546 19231.525 - 19345.998: 98.5776% ( 6) 00:10:48.546 19345.998 - 19460.472: 98.6207% ( 4) 00:10:48.546 29305.181 - 29534.128: 98.6530% ( 3) 00:10:48.546 29534.128 - 29763.074: 98.7392% ( 8) 00:10:48.546 29763.074 - 29992.021: 98.8254% ( 8) 00:10:48.546 29992.021 - 30220.968: 98.9116% ( 8) 00:10:48.546 30220.968 - 30449.914: 99.0086% ( 9) 00:10:48.546 
30449.914 - 30678.861: 99.0948% ( 8) 00:10:48.546 30678.861 - 30907.808: 99.1918% ( 9) 00:10:48.546 30907.808 - 31136.755: 99.2565% ( 6) 00:10:48.546 31136.755 - 31365.701: 99.3103% ( 5) 00:10:48.546 39378.837 - 39607.783: 99.3427% ( 3) 00:10:48.546 39607.783 - 39836.730: 99.3966% ( 5) 00:10:48.546 39836.730 - 40065.677: 99.4504% ( 5) 00:10:48.546 40065.677 - 40294.624: 99.5043% ( 5) 00:10:48.546 40294.624 - 40523.570: 99.5582% ( 5) 00:10:48.546 40523.570 - 40752.517: 99.6121% ( 5) 00:10:48.546 40752.517 - 40981.464: 99.6659% ( 5) 00:10:48.546 40981.464 - 41210.410: 99.7198% ( 5) 00:10:48.546 41210.410 - 41439.357: 99.7737% ( 5) 00:10:48.546 41439.357 - 41668.304: 99.8276% ( 5) 00:10:48.546 41668.304 - 41897.251: 99.8815% ( 5) 00:10:48.546 41897.251 - 42126.197: 99.9461% ( 6) 00:10:48.546 42126.197 - 42355.144: 100.0000% ( 5) 00:10:48.546 00:10:48.546 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:48.546 ============================================================================== 00:10:48.546 Range in us Cumulative IO count 00:10:48.546 9272.342 - 9329.579: 0.0323% ( 3) 00:10:48.546 9329.579 - 9386.816: 0.1078% ( 7) 00:10:48.546 9386.816 - 9444.052: 0.2478% ( 13) 00:10:48.546 9444.052 - 9501.289: 0.4526% ( 19) 00:10:48.546 9501.289 - 9558.526: 0.9159% ( 43) 00:10:48.546 9558.526 - 9615.762: 1.6056% ( 64) 00:10:48.546 9615.762 - 9672.999: 2.4461% ( 78) 00:10:48.546 9672.999 - 9730.236: 3.5668% ( 104) 00:10:48.546 9730.236 - 9787.472: 4.6983% ( 105) 00:10:48.546 9787.472 - 9844.709: 5.8513% ( 107) 00:10:48.546 9844.709 - 9901.946: 6.9504% ( 102) 00:10:48.546 9901.946 - 9959.183: 8.1681% ( 113) 00:10:48.546 9959.183 - 10016.419: 9.2565% ( 101) 00:10:48.546 10016.419 - 10073.656: 10.4634% ( 112) 00:10:48.546 10073.656 - 10130.893: 11.5841% ( 104) 00:10:48.546 10130.893 - 10188.129: 12.3384% ( 70) 00:10:48.546 10188.129 - 10245.366: 12.9310% ( 55) 00:10:48.546 10245.366 - 10302.603: 13.4267% ( 46) 00:10:48.546 10302.603 - 10359.839: 13.7823% ( 33) 00:10:48.546 10359.839 - 10417.076: 14.1379% ( 33) 00:10:48.546 10417.076 - 10474.313: 14.3642% ( 21) 00:10:48.546 10474.313 - 10531.549: 14.5690% ( 19) 00:10:48.546 10531.549 - 10588.786: 14.7737% ( 19) 00:10:48.546 10588.786 - 10646.023: 15.0323% ( 24) 00:10:48.546 10646.023 - 10703.259: 15.3448% ( 29) 00:10:48.546 10703.259 - 10760.496: 15.8082% ( 43) 00:10:48.546 10760.496 - 10817.733: 16.4655% ( 61) 00:10:48.546 10817.733 - 10874.969: 17.4461% ( 91) 00:10:48.547 10874.969 - 10932.206: 18.2759% ( 77) 00:10:48.547 10932.206 - 10989.443: 19.0625% ( 73) 00:10:48.547 10989.443 - 11046.679: 19.6983% ( 59) 00:10:48.547 11046.679 - 11103.916: 20.2586% ( 52) 00:10:48.547 11103.916 - 11161.153: 20.8297% ( 53) 00:10:48.547 11161.153 - 11218.390: 21.4009% ( 53) 00:10:48.547 11218.390 - 11275.626: 22.1228% ( 67) 00:10:48.547 11275.626 - 11332.863: 22.7694% ( 60) 00:10:48.547 11332.863 - 11390.100: 23.7823% ( 94) 00:10:48.547 11390.100 - 11447.336: 24.9246% ( 106) 00:10:48.547 11447.336 - 11504.573: 25.7004% ( 72) 00:10:48.547 11504.573 - 11561.810: 26.6595% ( 89) 00:10:48.547 11561.810 - 11619.046: 27.5970% ( 87) 00:10:48.547 11619.046 - 11676.283: 28.5560% ( 89) 00:10:48.547 11676.283 - 11733.520: 29.3966% ( 78) 00:10:48.547 11733.520 - 11790.756: 30.2155% ( 76) 00:10:48.547 11790.756 - 11847.993: 31.1638% ( 88) 00:10:48.547 11847.993 - 11905.230: 32.1121% ( 88) 00:10:48.547 11905.230 - 11962.466: 33.2004% ( 101) 00:10:48.547 11962.466 - 12019.703: 34.1272% ( 86) 00:10:48.547 12019.703 - 12076.940: 35.0323% ( 84) 00:10:48.547 12076.940 
- 12134.176: 35.6681% ( 59) 00:10:48.547 12134.176 - 12191.413: 36.1207% ( 42) 00:10:48.547 12191.413 - 12248.650: 36.6056% ( 45) 00:10:48.547 12248.650 - 12305.886: 37.0905% ( 45) 00:10:48.547 12305.886 - 12363.123: 37.5970% ( 47) 00:10:48.547 12363.123 - 12420.360: 38.2004% ( 56) 00:10:48.547 12420.360 - 12477.597: 38.7284% ( 49) 00:10:48.547 12477.597 - 12534.833: 39.2888% ( 52) 00:10:48.547 12534.833 - 12592.070: 39.6552% ( 34) 00:10:48.547 12592.070 - 12649.307: 40.0323% ( 35) 00:10:48.547 12649.307 - 12706.543: 40.7328% ( 65) 00:10:48.547 12706.543 - 12763.780: 41.7565% ( 95) 00:10:48.547 12763.780 - 12821.017: 42.4246% ( 62) 00:10:48.547 12821.017 - 12878.253: 42.9849% ( 52) 00:10:48.547 12878.253 - 12935.490: 43.5022% ( 48) 00:10:48.547 12935.490 - 12992.727: 44.0409% ( 50) 00:10:48.547 12992.727 - 13049.963: 44.5690% ( 49) 00:10:48.547 13049.963 - 13107.200: 45.1078% ( 50) 00:10:48.547 13107.200 - 13164.437: 45.6358% ( 49) 00:10:48.547 13164.437 - 13221.673: 46.2608% ( 58) 00:10:48.547 13221.673 - 13278.910: 47.1767% ( 85) 00:10:48.547 13278.910 - 13336.147: 48.1466% ( 90) 00:10:48.547 13336.147 - 13393.383: 49.0625% ( 85) 00:10:48.547 13393.383 - 13450.620: 49.7629% ( 65) 00:10:48.547 13450.620 - 13507.857: 50.3017% ( 50) 00:10:48.547 13507.857 - 13565.093: 50.9267% ( 58) 00:10:48.547 13565.093 - 13622.330: 51.6056% ( 63) 00:10:48.547 13622.330 - 13679.567: 52.2306% ( 58) 00:10:48.547 13679.567 - 13736.803: 53.0819% ( 79) 00:10:48.547 13736.803 - 13794.040: 53.9547% ( 81) 00:10:48.547 13794.040 - 13851.277: 54.6013% ( 60) 00:10:48.547 13851.277 - 13908.514: 55.3448% ( 69) 00:10:48.547 13908.514 - 13965.750: 56.0129% ( 62) 00:10:48.547 13965.750 - 14022.987: 56.7349% ( 67) 00:10:48.547 14022.987 - 14080.224: 57.4030% ( 62) 00:10:48.547 14080.224 - 14137.460: 58.0172% ( 57) 00:10:48.547 14137.460 - 14194.697: 58.6207% ( 56) 00:10:48.547 14194.697 - 14251.934: 59.4181% ( 74) 00:10:48.547 14251.934 - 14309.170: 60.3017% ( 82) 00:10:48.547 14309.170 - 14366.407: 61.1530% ( 79) 00:10:48.547 14366.407 - 14423.644: 61.8750% ( 67) 00:10:48.547 14423.644 - 14480.880: 62.8017% ( 86) 00:10:48.547 14480.880 - 14538.117: 63.5776% ( 72) 00:10:48.547 14538.117 - 14595.354: 64.2134% ( 59) 00:10:48.547 14595.354 - 14652.590: 65.0431% ( 77) 00:10:48.547 14652.590 - 14767.064: 66.6164% ( 146) 00:10:48.547 14767.064 - 14881.537: 67.3599% ( 69) 00:10:48.547 14881.537 - 14996.010: 68.3082% ( 88) 00:10:48.547 14996.010 - 15110.484: 69.1810% ( 81) 00:10:48.547 15110.484 - 15224.957: 70.0539% ( 81) 00:10:48.547 15224.957 - 15339.431: 70.8728% ( 76) 00:10:48.547 15339.431 - 15453.904: 71.7026% ( 77) 00:10:48.547 15453.904 - 15568.377: 72.7047% ( 93) 00:10:48.547 15568.377 - 15682.851: 74.0086% ( 121) 00:10:48.547 15682.851 - 15797.324: 75.3341% ( 123) 00:10:48.547 15797.324 - 15911.797: 76.6164% ( 119) 00:10:48.547 15911.797 - 16026.271: 77.8233% ( 112) 00:10:48.547 16026.271 - 16140.744: 79.0409% ( 113) 00:10:48.547 16140.744 - 16255.217: 80.3556% ( 122) 00:10:48.547 16255.217 - 16369.691: 82.0259% ( 155) 00:10:48.547 16369.691 - 16484.164: 83.4052% ( 128) 00:10:48.547 16484.164 - 16598.638: 85.4634% ( 191) 00:10:48.547 16598.638 - 16713.111: 87.1983% ( 161) 00:10:48.547 16713.111 - 16827.584: 88.6638% ( 136) 00:10:48.547 16827.584 - 16942.058: 89.5690% ( 84) 00:10:48.547 16942.058 - 17056.531: 90.2909% ( 67) 00:10:48.547 17056.531 - 17171.004: 90.8621% ( 53) 00:10:48.547 17171.004 - 17285.478: 91.6487% ( 73) 00:10:48.547 17285.478 - 17399.951: 92.5216% ( 81) 00:10:48.547 17399.951 - 17514.424: 93.2974% ( 
72) 00:10:48.547 17514.424 - 17628.898: 93.9440% ( 60) 00:10:48.547 17628.898 - 17743.371: 94.5797% ( 59) 00:10:48.547 17743.371 - 17857.845: 94.9461% ( 34) 00:10:48.547 17857.845 - 17972.318: 95.2802% ( 31) 00:10:48.547 17972.318 - 18086.791: 95.9806% ( 65) 00:10:48.547 18086.791 - 18201.265: 96.5086% ( 49) 00:10:48.547 18201.265 - 18315.738: 96.8966% ( 36) 00:10:48.547 18315.738 - 18430.211: 97.0905% ( 18) 00:10:48.547 18430.211 - 18544.685: 97.2737% ( 17) 00:10:48.547 18544.685 - 18659.158: 97.4138% ( 13) 00:10:48.547 18659.158 - 18773.631: 97.7047% ( 27) 00:10:48.547 18773.631 - 18888.105: 97.8879% ( 17) 00:10:48.547 18888.105 - 19002.578: 98.0603% ( 16) 00:10:48.547 19002.578 - 19117.052: 98.2866% ( 21) 00:10:48.547 19117.052 - 19231.525: 98.4698% ( 17) 00:10:48.547 19231.525 - 19345.998: 98.5884% ( 11) 00:10:48.547 19345.998 - 19460.472: 98.6207% ( 3) 00:10:48.547 27931.500 - 28045.974: 98.6315% ( 1) 00:10:48.547 28045.974 - 28160.447: 98.6746% ( 4) 00:10:48.547 28160.447 - 28274.921: 98.7177% ( 4) 00:10:48.547 28274.921 - 28389.394: 98.7716% ( 5) 00:10:48.547 28389.394 - 28503.867: 98.8147% ( 4) 00:10:48.547 28503.867 - 28618.341: 98.8578% ( 4) 00:10:48.547 28618.341 - 28732.814: 98.9009% ( 4) 00:10:48.547 28732.814 - 28847.287: 98.9440% ( 4) 00:10:48.547 28847.287 - 28961.761: 98.9871% ( 4) 00:10:48.547 28961.761 - 29076.234: 99.0409% ( 5) 00:10:48.547 29076.234 - 29190.707: 99.0733% ( 3) 00:10:48.547 29190.707 - 29305.181: 99.1164% ( 4) 00:10:48.547 29305.181 - 29534.128: 99.2026% ( 8) 00:10:48.547 29534.128 - 29763.074: 99.2888% ( 8) 00:10:48.547 29763.074 - 29992.021: 99.3103% ( 2) 00:10:48.547 38234.103 - 38463.050: 99.3319% ( 2) 00:10:48.547 38463.050 - 38691.997: 99.3966% ( 6) 00:10:48.547 38691.997 - 38920.943: 99.4720% ( 7) 00:10:48.547 38920.943 - 39149.890: 99.5259% ( 5) 00:10:48.547 39149.890 - 39378.837: 99.5905% ( 6) 00:10:48.547 39378.837 - 39607.783: 99.6552% ( 6) 00:10:48.547 39607.783 - 39836.730: 99.7091% ( 5) 00:10:48.547 39836.730 - 40065.677: 99.7737% ( 6) 00:10:48.547 40065.677 - 40294.624: 99.8384% ( 6) 00:10:48.547 40294.624 - 40523.570: 99.8922% ( 5) 00:10:48.547 40523.570 - 40752.517: 99.9569% ( 6) 00:10:48.547 40752.517 - 40981.464: 100.0000% ( 4) 00:10:48.547 00:10:48.547 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:48.547 ============================================================================== 00:10:48.547 Range in us Cumulative IO count 00:10:48.547 9043.396 - 9100.632: 0.0107% ( 1) 00:10:48.547 9157.869 - 9215.106: 0.0214% ( 1) 00:10:48.547 9272.342 - 9329.579: 0.0321% ( 1) 00:10:48.547 9329.579 - 9386.816: 0.0428% ( 1) 00:10:48.547 9386.816 - 9444.052: 0.0856% ( 4) 00:10:48.547 9444.052 - 9501.289: 0.2033% ( 11) 00:10:48.547 9501.289 - 9558.526: 0.4602% ( 24) 00:10:48.547 9558.526 - 9615.762: 0.9525% ( 46) 00:10:48.547 9615.762 - 9672.999: 1.7016% ( 70) 00:10:48.547 9672.999 - 9730.236: 2.8467% ( 107) 00:10:48.547 9730.236 - 9787.472: 4.0454% ( 112) 00:10:48.547 9787.472 - 9844.709: 5.3510% ( 122) 00:10:48.547 9844.709 - 9901.946: 6.8279% ( 138) 00:10:48.547 9901.946 - 9959.183: 8.1443% ( 123) 00:10:48.547 9959.183 - 10016.419: 9.2359% ( 102) 00:10:48.547 10016.419 - 10073.656: 10.1241% ( 83) 00:10:48.547 10073.656 - 10130.893: 10.9482% ( 77) 00:10:48.547 10130.893 - 10188.129: 11.9114% ( 90) 00:10:48.547 10188.129 - 10245.366: 12.4251% ( 48) 00:10:48.547 10245.366 - 10302.603: 12.7997% ( 35) 00:10:48.547 10302.603 - 10359.839: 13.1207% ( 30) 00:10:48.547 10359.839 - 10417.076: 13.6451% ( 49) 00:10:48.547 10417.076 - 
10474.313: 13.9876% ( 32) 00:10:48.548 10474.313 - 10531.549: 14.1802% ( 18) 00:10:48.548 10531.549 - 10588.786: 14.5013% ( 30) 00:10:48.548 10588.786 - 10646.023: 15.1006% ( 56) 00:10:48.548 10646.023 - 10703.259: 15.6571% ( 52) 00:10:48.548 10703.259 - 10760.496: 16.2671% ( 57) 00:10:48.548 10760.496 - 10817.733: 17.0377% ( 72) 00:10:48.548 10817.733 - 10874.969: 17.6691% ( 59) 00:10:48.548 10874.969 - 10932.206: 18.5146% ( 79) 00:10:48.548 10932.206 - 10989.443: 19.1995% ( 64) 00:10:48.548 10989.443 - 11046.679: 19.8202% ( 58) 00:10:48.548 11046.679 - 11103.916: 20.3767% ( 52) 00:10:48.548 11103.916 - 11161.153: 21.2436% ( 81) 00:10:48.548 11161.153 - 11218.390: 22.0890% ( 79) 00:10:48.548 11218.390 - 11275.626: 23.1807% ( 102) 00:10:48.548 11275.626 - 11332.863: 24.1652% ( 92) 00:10:48.548 11332.863 - 11390.100: 25.1070% ( 88) 00:10:48.548 11390.100 - 11447.336: 26.3699% ( 118) 00:10:48.548 11447.336 - 11504.573: 27.3438% ( 91) 00:10:48.548 11504.573 - 11561.810: 28.3711% ( 96) 00:10:48.548 11561.810 - 11619.046: 29.5698% ( 112) 00:10:48.548 11619.046 - 11676.283: 30.4902% ( 86) 00:10:48.548 11676.283 - 11733.520: 31.6567% ( 109) 00:10:48.548 11733.520 - 11790.756: 32.6520% ( 93) 00:10:48.548 11790.756 - 11847.993: 33.5509% ( 84) 00:10:48.548 11847.993 - 11905.230: 34.2466% ( 65) 00:10:48.548 11905.230 - 11962.466: 34.8673% ( 58) 00:10:48.548 11962.466 - 12019.703: 35.4880% ( 58) 00:10:48.548 12019.703 - 12076.940: 35.7663% ( 26) 00:10:48.548 12076.940 - 12134.176: 36.0766% ( 29) 00:10:48.548 12134.176 - 12191.413: 36.3763% ( 28) 00:10:48.548 12191.413 - 12248.650: 36.6117% ( 22) 00:10:48.548 12248.650 - 12305.886: 36.9007% ( 27) 00:10:48.548 12305.886 - 12363.123: 37.2646% ( 34) 00:10:48.548 12363.123 - 12420.360: 37.9174% ( 61) 00:10:48.548 12420.360 - 12477.597: 38.5595% ( 60) 00:10:48.548 12477.597 - 12534.833: 39.1267% ( 53) 00:10:48.548 12534.833 - 12592.070: 39.6832% ( 52) 00:10:48.548 12592.070 - 12649.307: 40.0578% ( 35) 00:10:48.548 12649.307 - 12706.543: 40.4859% ( 40) 00:10:48.548 12706.543 - 12763.780: 40.8818% ( 37) 00:10:48.548 12763.780 - 12821.017: 41.2029% ( 30) 00:10:48.548 12821.017 - 12878.253: 41.3955% ( 18) 00:10:48.548 12878.253 - 12935.490: 41.6952% ( 28) 00:10:48.548 12935.490 - 12992.727: 42.1768% ( 45) 00:10:48.548 12992.727 - 13049.963: 43.0009% ( 77) 00:10:48.548 13049.963 - 13107.200: 43.8249% ( 77) 00:10:48.548 13107.200 - 13164.437: 44.4563% ( 59) 00:10:48.548 13164.437 - 13221.673: 44.9914% ( 50) 00:10:48.548 13221.673 - 13278.910: 45.5051% ( 48) 00:10:48.548 13278.910 - 13336.147: 45.9760% ( 44) 00:10:48.548 13336.147 - 13393.383: 46.5218% ( 51) 00:10:48.548 13393.383 - 13450.620: 47.1640% ( 60) 00:10:48.548 13450.620 - 13507.857: 47.6455% ( 45) 00:10:48.548 13507.857 - 13565.093: 48.0843% ( 41) 00:10:48.548 13565.093 - 13622.330: 48.5445% ( 43) 00:10:48.548 13622.330 - 13679.567: 49.1759% ( 59) 00:10:48.548 13679.567 - 13736.803: 49.8288% ( 61) 00:10:48.548 13736.803 - 13794.040: 50.4174% ( 55) 00:10:48.548 13794.040 - 13851.277: 51.1879% ( 72) 00:10:48.548 13851.277 - 13908.514: 51.8408% ( 61) 00:10:48.548 13908.514 - 13965.750: 52.5792% ( 69) 00:10:48.548 13965.750 - 14022.987: 53.4033% ( 77) 00:10:48.548 14022.987 - 14080.224: 54.2059% ( 75) 00:10:48.548 14080.224 - 14137.460: 54.8801% ( 63) 00:10:48.548 14137.460 - 14194.697: 55.4902% ( 57) 00:10:48.548 14194.697 - 14251.934: 56.3249% ( 78) 00:10:48.548 14251.934 - 14309.170: 57.4058% ( 101) 00:10:48.548 14309.170 - 14366.407: 58.3690% ( 90) 00:10:48.548 14366.407 - 14423.644: 59.3643% ( 93) 
00:10:48.548 14423.644 - 14480.880: 60.5094% ( 107) 00:10:48.548 14480.880 - 14538.117: 61.4405% ( 87) 00:10:48.548 14538.117 - 14595.354: 62.4465% ( 94) 00:10:48.548 14595.354 - 14652.590: 63.3883% ( 88) 00:10:48.548 14652.590 - 14767.064: 65.0471% ( 155) 00:10:48.548 14767.064 - 14881.537: 66.9735% ( 180) 00:10:48.548 14881.537 - 14996.010: 68.4717% ( 140) 00:10:48.548 14996.010 - 15110.484: 69.5955% ( 105) 00:10:48.548 15110.484 - 15224.957: 70.7192% ( 105) 00:10:48.548 15224.957 - 15339.431: 71.7573% ( 97) 00:10:48.548 15339.431 - 15453.904: 72.9773% ( 114) 00:10:48.548 15453.904 - 15568.377: 74.0047% ( 96) 00:10:48.548 15568.377 - 15682.851: 75.1498% ( 107) 00:10:48.548 15682.851 - 15797.324: 76.4555% ( 122) 00:10:48.548 15797.324 - 15911.797: 77.4936% ( 97) 00:10:48.548 15911.797 - 16026.271: 78.3818% ( 83) 00:10:48.548 16026.271 - 16140.744: 79.9872% ( 150) 00:10:48.548 16140.744 - 16255.217: 81.5497% ( 146) 00:10:48.548 16255.217 - 16369.691: 82.4486% ( 84) 00:10:48.548 16369.691 - 16484.164: 83.2941% ( 79) 00:10:48.548 16484.164 - 16598.638: 84.2038% ( 85) 00:10:48.548 16598.638 - 16713.111: 85.5308% ( 124) 00:10:48.548 16713.111 - 16827.584: 86.8258% ( 121) 00:10:48.548 16827.584 - 16942.058: 88.3027% ( 138) 00:10:48.548 16942.058 - 17056.531: 89.5762% ( 119) 00:10:48.548 17056.531 - 17171.004: 90.3574% ( 73) 00:10:48.548 17171.004 - 17285.478: 91.4705% ( 104) 00:10:48.548 17285.478 - 17399.951: 92.7226% ( 117) 00:10:48.548 17399.951 - 17514.424: 93.8677% ( 107) 00:10:48.548 17514.424 - 17628.898: 94.7560% ( 83) 00:10:48.548 17628.898 - 17743.371: 95.5372% ( 73) 00:10:48.548 17743.371 - 17857.845: 96.3078% ( 72) 00:10:48.548 17857.845 - 17972.318: 97.0462% ( 69) 00:10:48.548 17972.318 - 18086.791: 97.4636% ( 39) 00:10:48.548 18086.791 - 18201.265: 97.6884% ( 21) 00:10:48.548 18201.265 - 18315.738: 97.8596% ( 16) 00:10:48.548 18315.738 - 18430.211: 97.9238% ( 6) 00:10:48.548 18430.211 - 18544.685: 97.9452% ( 2) 00:10:48.548 18888.105 - 19002.578: 98.0094% ( 6) 00:10:48.548 19002.578 - 19117.052: 98.0736% ( 6) 00:10:48.548 19117.052 - 19231.525: 98.1485% ( 7) 00:10:48.548 19231.525 - 19345.998: 98.2556% ( 10) 00:10:48.548 19345.998 - 19460.472: 98.3626% ( 10) 00:10:48.548 19460.472 - 19574.945: 98.4696% ( 10) 00:10:48.548 19574.945 - 19689.418: 98.5766% ( 10) 00:10:48.548 19689.418 - 19803.892: 98.6729% ( 9) 00:10:48.548 19803.892 - 19918.365: 98.7800% ( 10) 00:10:48.548 19918.365 - 20032.838: 98.8763% ( 9) 00:10:48.548 20032.838 - 20147.312: 98.9619% ( 8) 00:10:48.548 20147.312 - 20261.785: 99.0368% ( 7) 00:10:48.548 20261.785 - 20376.259: 99.0796% ( 4) 00:10:48.548 20376.259 - 20490.732: 99.1331% ( 5) 00:10:48.548 20490.732 - 20605.205: 99.1759% ( 4) 00:10:48.548 20605.205 - 20719.679: 99.2295% ( 5) 00:10:48.548 20719.679 - 20834.152: 99.2830% ( 5) 00:10:48.548 20834.152 - 20948.625: 99.3151% ( 3) 00:10:48.548 27359.134 - 27473.607: 99.3258% ( 1) 00:10:48.548 27473.607 - 27588.080: 99.3686% ( 4) 00:10:48.548 27588.080 - 27702.554: 99.4114% ( 4) 00:10:48.548 27702.554 - 27817.027: 99.4542% ( 4) 00:10:48.548 27817.027 - 27931.500: 99.4970% ( 4) 00:10:48.548 27931.500 - 28045.974: 99.5398% ( 4) 00:10:48.548 28045.974 - 28160.447: 99.5826% ( 4) 00:10:48.548 28160.447 - 28274.921: 99.6147% ( 3) 00:10:48.548 28274.921 - 28389.394: 99.6575% ( 4) 00:10:48.548 28389.394 - 28503.867: 99.7003% ( 4) 00:10:48.548 28503.867 - 28618.341: 99.7432% ( 4) 00:10:48.548 28618.341 - 28732.814: 99.7860% ( 4) 00:10:48.548 28732.814 - 28847.287: 99.8288% ( 4) 00:10:48.548 28847.287 - 28961.761: 99.8716% 
( 4) 00:10:48.548 28961.761 - 29076.234: 99.9144% ( 4) 00:10:48.548 29076.234 - 29190.707: 99.9572% ( 4) 00:10:48.548 29190.707 - 29305.181: 100.0000% ( 4) 00:10:48.548 00:10:48.548 15:06:26 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:10:48.548 00:10:48.548 real 0m2.593s 00:10:48.548 user 0m2.238s 00:10:48.548 sys 0m0.253s 00:10:48.548 15:06:26 nvme.nvme_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:48.548 15:06:26 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:10:48.548 ************************************ 00:10:48.548 END TEST nvme_perf 00:10:48.548 ************************************ 00:10:48.548 15:06:26 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:48.548 15:06:26 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:10:48.548 15:06:26 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:48.548 15:06:26 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:48.548 15:06:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:48.548 ************************************ 00:10:48.548 START TEST nvme_hello_world 00:10:48.548 ************************************ 00:10:48.548 15:06:26 nvme.nvme_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:10:48.548 Initializing NVMe Controllers 00:10:48.548 Attached to 0000:00:10.0 00:10:48.548 Namespace ID: 1 size: 6GB 00:10:48.548 Attached to 0000:00:11.0 00:10:48.548 Namespace ID: 1 size: 5GB 00:10:48.548 Attached to 0000:00:13.0 00:10:48.548 Namespace ID: 1 size: 1GB 00:10:48.548 Attached to 0000:00:12.0 00:10:48.548 Namespace ID: 1 size: 4GB 00:10:48.548 Namespace ID: 2 size: 4GB 00:10:48.548 Namespace ID: 3 size: 4GB 00:10:48.548 Initialization complete. 00:10:48.548 INFO: using host memory buffer for IO 00:10:48.548 Hello world! 00:10:48.548 INFO: using host memory buffer for IO 00:10:48.548 Hello world! 00:10:48.548 INFO: using host memory buffer for IO 00:10:48.548 Hello world! 00:10:48.548 INFO: using host memory buffer for IO 00:10:48.548 Hello world! 00:10:48.548 INFO: using host memory buffer for IO 00:10:48.548 Hello world! 00:10:48.548 INFO: using host memory buffer for IO 00:10:48.548 Hello world! 
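The hello_world example invoked above probes and attaches each PCIe controller, then pushes an I/O through every namespace using a plain pinned host buffer (hence the "using host memory buffer for IO" lines). A minimal sketch of that flow against the public spdk/nvme.h API is shown below; it is not the example's actual source, and names such as hello_sketch, the first-controller shortcut, the single write, and the trimmed error handling are assumptions of the sketch.

#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

static struct spdk_nvme_ctrlr *g_ctrlr;
static bool g_done;

static bool
probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
         struct spdk_nvme_ctrlr_opts *opts)
{
        return true;                    /* attach to every controller found on probe */
}

static void
attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
          struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
        if (g_ctrlr == NULL) {
                g_ctrlr = ctrlr;        /* the sketch only keeps the first controller */
        }
}

static void
io_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
        g_done = true;
}

int
main(void)
{
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "hello_sketch";
        if (spdk_env_init(&opts) < 0) {
                return 1;
        }
        /* NULL transport ID: enumerate the local PCIe bus */
        if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0 || g_ctrlr == NULL) {
                return 1;
        }

        uint32_t nsid = spdk_nvme_ctrlr_get_first_active_ns(g_ctrlr);
        struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(g_ctrlr, nsid);
        struct spdk_nvme_qpair *qpair = spdk_nvme_ctrlr_alloc_io_qpair(g_ctrlr, NULL, 0);

        /* "using host memory buffer for IO": one sector in a pinned, DMA-able host buffer */
        uint32_t sz = spdk_nvme_ns_get_sector_size(ns);
        char *buf = spdk_zmalloc(sz, 0x1000, NULL, SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);

        snprintf(buf, sz, "Hello world!\n");
        spdk_nvme_ns_cmd_write(ns, qpair, buf, 0 /* LBA */, 1 /* block count */,
                               io_complete, NULL, 0);
        while (!g_done) {
                spdk_nvme_qpair_process_completions(qpair, 0);  /* poll, no interrupts */
        }

        spdk_free(buf);
        spdk_nvme_ctrlr_free_io_qpair(qpair);
        spdk_nvme_detach(g_ctrlr);
        return 0;
}

Completion is polled rather than interrupt-driven: the loop over spdk_nvme_qpair_process_completions() is what lets io_complete() fire, which is the same pattern every test in this run relies on.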
00:10:48.548 00:10:48.548 real 0m0.258s 00:10:48.548 user 0m0.096s 00:10:48.548 sys 0m0.121s 00:10:48.548 15:06:26 nvme.nvme_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:48.548 15:06:26 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:10:48.548 ************************************ 00:10:48.548 END TEST nvme_hello_world 00:10:48.548 ************************************ 00:10:48.807 15:06:26 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:48.807 15:06:26 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:10:48.807 15:06:26 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:48.807 15:06:26 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:48.807 15:06:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:48.807 ************************************ 00:10:48.807 START TEST nvme_sgl 00:10:48.807 ************************************ 00:10:48.807 15:06:26 nvme.nvme_sgl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:10:48.807 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:10:48.807 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:10:48.807 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:10:48.807 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:10:48.807 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:10:48.807 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:10:48.807 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:10:48.807 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:10:48.807 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:10:49.067 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:10:49.067 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:10:49.067 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:10:49.067 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:10:49.067 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:10:49.067 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:10:49.067 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:10:49.067 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:10:49.067 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:10:49.067 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:10:49.067 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:10:49.067 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:10:49.067 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:10:49.067 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:10:49.067 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:10:49.067 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:10:49.067 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:10:49.067 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:10:49.067 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:10:49.067 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:10:49.067 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:10:49.067 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:10:49.067 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:10:49.067 0000:00:12.0: build_io_request_8 Invalid IO length parameter 
00:10:49.067 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:10:49.067 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:10:49.067 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:10:49.067 NVMe Readv/Writev Request test 00:10:49.067 Attached to 0000:00:10.0 00:10:49.067 Attached to 0000:00:11.0 00:10:49.067 Attached to 0000:00:13.0 00:10:49.067 Attached to 0000:00:12.0 00:10:49.067 0000:00:10.0: build_io_request_2 test passed 00:10:49.067 0000:00:10.0: build_io_request_4 test passed 00:10:49.067 0000:00:10.0: build_io_request_5 test passed 00:10:49.067 0000:00:10.0: build_io_request_6 test passed 00:10:49.067 0000:00:10.0: build_io_request_7 test passed 00:10:49.067 0000:00:10.0: build_io_request_10 test passed 00:10:49.067 0000:00:11.0: build_io_request_2 test passed 00:10:49.067 0000:00:11.0: build_io_request_4 test passed 00:10:49.067 0000:00:11.0: build_io_request_5 test passed 00:10:49.067 0000:00:11.0: build_io_request_6 test passed 00:10:49.067 0000:00:11.0: build_io_request_7 test passed 00:10:49.067 0000:00:11.0: build_io_request_10 test passed 00:10:49.067 Cleaning up... 00:10:49.067 00:10:49.067 real 0m0.326s 00:10:49.067 user 0m0.152s 00:10:49.067 sys 0m0.130s 00:10:49.067 15:06:26 nvme.nvme_sgl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:49.067 15:06:26 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:10:49.067 ************************************ 00:10:49.067 END TEST nvme_sgl 00:10:49.067 ************************************ 00:10:49.067 15:06:27 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:49.067 15:06:27 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:10:49.067 15:06:27 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:49.067 15:06:27 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:49.067 15:06:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:49.067 ************************************ 00:10:49.067 START TEST nvme_e2edp 00:10:49.067 ************************************ 00:10:49.067 15:06:27 nvme.nvme_e2edp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:10:49.327 NVMe Write/Read with End-to-End data protection test 00:10:49.327 Attached to 0000:00:10.0 00:10:49.327 Attached to 0000:00:11.0 00:10:49.327 Attached to 0000:00:13.0 00:10:49.327 Attached to 0000:00:12.0 00:10:49.327 Cleaning up... 
00:10:49.327 00:10:49.327 real 0m0.252s 00:10:49.327 user 0m0.084s 00:10:49.327 sys 0m0.124s 00:10:49.327 15:06:27 nvme.nvme_e2edp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:49.327 15:06:27 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:10:49.327 ************************************ 00:10:49.327 END TEST nvme_e2edp 00:10:49.327 ************************************ 00:10:49.327 15:06:27 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:49.327 15:06:27 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:10:49.327 15:06:27 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:49.327 15:06:27 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:49.327 15:06:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:49.327 ************************************ 00:10:49.327 START TEST nvme_reserve 00:10:49.327 ************************************ 00:10:49.327 15:06:27 nvme.nvme_reserve -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:10:49.586 ===================================================== 00:10:49.586 NVMe Controller at PCI bus 0, device 16, function 0 00:10:49.586 ===================================================== 00:10:49.586 Reservations: Not Supported 00:10:49.587 ===================================================== 00:10:49.587 NVMe Controller at PCI bus 0, device 17, function 0 00:10:49.587 ===================================================== 00:10:49.587 Reservations: Not Supported 00:10:49.587 ===================================================== 00:10:49.587 NVMe Controller at PCI bus 0, device 19, function 0 00:10:49.587 ===================================================== 00:10:49.587 Reservations: Not Supported 00:10:49.587 ===================================================== 00:10:49.587 NVMe Controller at PCI bus 0, device 18, function 0 00:10:49.587 ===================================================== 00:10:49.587 Reservations: Not Supported 00:10:49.587 Reservation test passed 00:10:49.587 00:10:49.587 real 0m0.271s 00:10:49.587 user 0m0.088s 00:10:49.587 sys 0m0.142s 00:10:49.587 15:06:27 nvme.nvme_reserve -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:49.587 15:06:27 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:10:49.587 ************************************ 00:10:49.587 END TEST nvme_reserve 00:10:49.587 ************************************ 00:10:49.587 15:06:27 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:49.587 15:06:27 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:10:49.587 15:06:27 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:49.587 15:06:27 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:49.587 15:06:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:49.587 ************************************ 00:10:49.587 START TEST nvme_err_injection 00:10:49.587 ************************************ 00:10:49.587 15:06:27 nvme.nvme_err_injection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:10:49.845 NVMe Error Injection test 00:10:49.845 Attached to 0000:00:10.0 00:10:49.845 Attached to 0000:00:11.0 00:10:49.845 Attached to 0000:00:13.0 00:10:49.845 Attached to 0000:00:12.0 00:10:49.845 0000:00:12.0: get features failed as expected 00:10:49.845 0000:00:10.0: get features 
failed as expected 00:10:49.845 0000:00:11.0: get features failed as expected 00:10:49.845 0000:00:13.0: get features failed as expected 00:10:49.845 0000:00:10.0: get features successfully as expected 00:10:49.845 0000:00:11.0: get features successfully as expected 00:10:49.845 0000:00:13.0: get features successfully as expected 00:10:49.845 0000:00:12.0: get features successfully as expected 00:10:49.845 0000:00:11.0: read failed as expected 00:10:49.845 0000:00:13.0: read failed as expected 00:10:49.845 0000:00:10.0: read failed as expected 00:10:49.845 0000:00:12.0: read failed as expected 00:10:49.845 0000:00:11.0: read successfully as expected 00:10:49.845 0000:00:10.0: read successfully as expected 00:10:49.845 0000:00:13.0: read successfully as expected 00:10:49.845 0000:00:12.0: read successfully as expected 00:10:49.845 Cleaning up... 00:10:49.845 00:10:49.845 real 0m0.250s 00:10:49.845 user 0m0.098s 00:10:49.845 sys 0m0.112s 00:10:49.845 15:06:27 nvme.nvme_err_injection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:49.845 15:06:27 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:10:49.845 ************************************ 00:10:49.845 END TEST nvme_err_injection 00:10:49.845 ************************************ 00:10:50.104 15:06:27 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:50.104 15:06:27 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:10:50.104 15:06:27 nvme -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:10:50.104 15:06:27 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:50.104 15:06:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:50.104 ************************************ 00:10:50.104 START TEST nvme_overhead 00:10:50.104 ************************************ 00:10:50.104 15:06:27 nvme.nvme_overhead -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:10:51.500 Initializing NVMe Controllers 00:10:51.500 Attached to 0000:00:10.0 00:10:51.500 Attached to 0000:00:11.0 00:10:51.500 Attached to 0000:00:13.0 00:10:51.500 Attached to 0000:00:12.0 00:10:51.500 Initialization complete. Launching workers. 
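The nvme_overhead step above drives the overhead tool with -o 4096 -t 1 -H -i 0, and the submit/complete latency histograms that follow are its output. A hedged sketch for reproducing only that measurement, with the flag values copied from the invocation above (consult the tool's help output for their exact semantics):

  # measure per-IO submit/complete overhead and print histograms, as the job above does
  sudo /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0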
00:10:51.500 submit (in ns) avg, min, max = 12595.4, 8603.5, 52586.9 00:10:51.500 complete (in ns) avg, min, max = 7466.0, 5518.8, 57675.1 00:10:51.500 00:10:51.500 Submit histogram 00:10:51.500 ================ 00:10:51.500 Range in us Cumulative Count 00:10:51.500 8.552 - 8.608: 0.0133% ( 1) 00:10:51.500 8.776 - 8.831: 0.0267% ( 1) 00:10:51.500 8.831 - 8.887: 0.0400% ( 1) 00:10:51.500 8.887 - 8.943: 0.0533% ( 1) 00:10:51.500 8.999 - 9.055: 0.0667% ( 1) 00:10:51.500 9.167 - 9.223: 0.0800% ( 1) 00:10:51.500 9.334 - 9.390: 0.0933% ( 1) 00:10:51.500 9.390 - 9.446: 0.1333% ( 3) 00:10:51.500 10.005 - 10.061: 0.1600% ( 2) 00:10:51.500 10.173 - 10.229: 0.1733% ( 1) 00:10:51.500 10.229 - 10.285: 0.2266% ( 4) 00:10:51.500 10.285 - 10.341: 0.3333% ( 8) 00:10:51.500 10.341 - 10.397: 0.4399% ( 8) 00:10:51.500 10.397 - 10.452: 0.6266% ( 14) 00:10:51.500 10.452 - 10.508: 0.7599% ( 10) 00:10:51.500 10.508 - 10.564: 1.0799% ( 24) 00:10:51.500 10.564 - 10.620: 1.4398% ( 27) 00:10:51.500 10.620 - 10.676: 2.0797% ( 48) 00:10:51.500 10.676 - 10.732: 3.0129% ( 70) 00:10:51.500 10.732 - 10.788: 4.0661% ( 79) 00:10:51.500 10.788 - 10.844: 5.2793% ( 91) 00:10:51.500 10.844 - 10.900: 6.7191% ( 108) 00:10:51.500 10.900 - 10.955: 8.2122% ( 112) 00:10:51.500 10.955 - 11.011: 9.8920% ( 126) 00:10:51.500 11.011 - 11.067: 12.0917% ( 165) 00:10:51.500 11.067 - 11.123: 13.9181% ( 137) 00:10:51.500 11.123 - 11.179: 15.8646% ( 146) 00:10:51.500 11.179 - 11.235: 17.9176% ( 154) 00:10:51.500 11.235 - 11.291: 20.2240% ( 173) 00:10:51.500 11.291 - 11.347: 22.5970% ( 178) 00:10:51.500 11.347 - 11.403: 24.8900% ( 172) 00:10:51.500 11.403 - 11.459: 27.2230% ( 175) 00:10:51.500 11.459 - 11.514: 29.2228% ( 150) 00:10:51.500 11.514 - 11.570: 31.2892% ( 155) 00:10:51.500 11.570 - 11.626: 32.9823% ( 127) 00:10:51.500 11.626 - 11.682: 34.6221% ( 123) 00:10:51.500 11.682 - 11.738: 36.6218% ( 150) 00:10:51.500 11.738 - 11.794: 38.4482% ( 137) 00:10:51.500 11.794 - 11.850: 40.3813% ( 145) 00:10:51.500 11.850 - 11.906: 42.2877% ( 143) 00:10:51.500 11.906 - 11.962: 44.6740% ( 179) 00:10:51.500 11.962 - 12.017: 46.8338% ( 162) 00:10:51.500 12.017 - 12.073: 49.0601% ( 167) 00:10:51.500 12.073 - 12.129: 51.3132% ( 169) 00:10:51.500 12.129 - 12.185: 53.3396% ( 152) 00:10:51.500 12.185 - 12.241: 55.7126% ( 178) 00:10:51.500 12.241 - 12.297: 57.6990% ( 149) 00:10:51.500 12.297 - 12.353: 59.9120% ( 166) 00:10:51.500 12.353 - 12.409: 62.1117% ( 165) 00:10:51.500 12.409 - 12.465: 63.9248% ( 136) 00:10:51.500 12.465 - 12.521: 65.7779% ( 139) 00:10:51.500 12.521 - 12.576: 67.6310% ( 139) 00:10:51.500 12.576 - 12.632: 69.3774% ( 131) 00:10:51.500 12.632 - 12.688: 71.1239% ( 131) 00:10:51.500 12.688 - 12.744: 72.6037% ( 111) 00:10:51.500 12.744 - 12.800: 73.7368% ( 85) 00:10:51.500 12.800 - 12.856: 74.7900% ( 79) 00:10:51.500 12.856 - 12.912: 75.6699% ( 66) 00:10:51.500 12.912 - 12.968: 76.8831% ( 91) 00:10:51.500 12.968 - 13.024: 77.8296% ( 71) 00:10:51.500 13.024 - 13.079: 78.6962% ( 65) 00:10:51.500 13.079 - 13.135: 79.7094% ( 76) 00:10:51.500 13.135 - 13.191: 80.6559% ( 71) 00:10:51.500 13.191 - 13.247: 81.4558% ( 60) 00:10:51.500 13.247 - 13.303: 82.3357% ( 66) 00:10:51.500 13.303 - 13.359: 83.3222% ( 74) 00:10:51.500 13.359 - 13.415: 84.1888% ( 65) 00:10:51.500 13.415 - 13.471: 84.8554% ( 50) 00:10:51.500 13.471 - 13.527: 85.4686% ( 46) 00:10:51.500 13.527 - 13.583: 85.9885% ( 39) 00:10:51.500 13.583 - 13.638: 86.4551% ( 35) 00:10:51.500 13.638 - 13.694: 86.8284% ( 28) 00:10:51.500 13.694 - 13.750: 87.1350% ( 23) 00:10:51.500 13.750 - 13.806: 
87.4817% ( 26) 00:10:51.500 13.806 - 13.862: 87.6950% ( 16) 00:10:51.500 13.862 - 13.918: 87.8816% ( 14) 00:10:51.500 13.918 - 13.974: 88.1083% ( 17) 00:10:51.500 13.974 - 14.030: 88.3082% ( 15) 00:10:51.500 14.030 - 14.086: 88.3882% ( 6) 00:10:51.500 14.086 - 14.141: 88.5882% ( 15) 00:10:51.500 14.141 - 14.197: 88.6815% ( 7) 00:10:51.500 14.197 - 14.253: 88.7482% ( 5) 00:10:51.500 14.253 - 14.309: 88.8548% ( 8) 00:10:51.500 14.309 - 14.421: 89.1481% ( 22) 00:10:51.500 14.421 - 14.533: 89.6280% ( 36) 00:10:51.500 14.533 - 14.645: 90.2680% ( 48) 00:10:51.500 14.645 - 14.756: 90.9079% ( 48) 00:10:51.500 14.756 - 14.868: 91.3745% ( 35) 00:10:51.500 14.868 - 14.980: 91.8278% ( 34) 00:10:51.500 14.980 - 15.092: 92.2010% ( 28) 00:10:51.500 15.092 - 15.203: 92.5477% ( 26) 00:10:51.500 15.203 - 15.315: 92.7610% ( 16) 00:10:51.500 15.315 - 15.427: 93.0143% ( 19) 00:10:51.500 15.427 - 15.539: 93.2276% ( 16) 00:10:51.500 15.539 - 15.651: 93.4409% ( 16) 00:10:51.500 15.651 - 15.762: 93.6408% ( 15) 00:10:51.500 15.762 - 15.874: 93.8542% ( 16) 00:10:51.500 15.874 - 15.986: 94.1075% ( 19) 00:10:51.500 15.986 - 16.098: 94.4007% ( 22) 00:10:51.500 16.098 - 16.210: 94.6274% ( 17) 00:10:51.500 16.210 - 16.321: 94.9073% ( 21) 00:10:51.500 16.321 - 16.433: 95.1207% ( 16) 00:10:51.500 16.433 - 16.545: 95.3473% ( 17) 00:10:51.500 16.545 - 16.657: 95.5606% ( 16) 00:10:51.500 16.657 - 16.769: 95.8139% ( 19) 00:10:51.500 16.769 - 16.880: 96.0272% ( 16) 00:10:51.500 16.880 - 16.992: 96.1738% ( 11) 00:10:51.500 16.992 - 17.104: 96.3472% ( 13) 00:10:51.500 17.104 - 17.216: 96.5471% ( 15) 00:10:51.500 17.216 - 17.328: 96.7204% ( 13) 00:10:51.500 17.328 - 17.439: 96.8937% ( 13) 00:10:51.500 17.439 - 17.551: 97.0404% ( 11) 00:10:51.500 17.551 - 17.663: 97.0937% ( 4) 00:10:51.500 17.663 - 17.775: 97.1604% ( 5) 00:10:51.500 17.775 - 17.886: 97.2537% ( 7) 00:10:51.500 17.886 - 17.998: 97.3204% ( 5) 00:10:51.500 17.998 - 18.110: 97.4003% ( 6) 00:10:51.500 18.110 - 18.222: 97.4803% ( 6) 00:10:51.500 18.222 - 18.334: 97.5337% ( 4) 00:10:51.500 18.334 - 18.445: 97.6003% ( 5) 00:10:51.500 18.445 - 18.557: 97.6536% ( 4) 00:10:51.500 18.557 - 18.669: 97.7070% ( 4) 00:10:51.500 18.669 - 18.781: 97.7870% ( 6) 00:10:51.500 18.781 - 18.893: 97.8536% ( 5) 00:10:51.500 18.893 - 19.004: 97.8936% ( 3) 00:10:51.500 19.004 - 19.116: 97.9336% ( 3) 00:10:51.500 19.116 - 19.228: 98.0269% ( 7) 00:10:51.500 19.228 - 19.340: 98.1069% ( 6) 00:10:51.500 19.340 - 19.452: 98.1203% ( 1) 00:10:51.500 19.452 - 19.563: 98.1602% ( 3) 00:10:51.500 19.563 - 19.675: 98.1736% ( 1) 00:10:51.500 19.675 - 19.787: 98.2002% ( 2) 00:10:51.500 19.787 - 19.899: 98.2402% ( 3) 00:10:51.500 20.234 - 20.346: 98.2669% ( 2) 00:10:51.500 20.458 - 20.569: 98.2936% ( 2) 00:10:51.500 20.793 - 20.905: 98.3069% ( 1) 00:10:51.500 20.905 - 21.017: 98.3202% ( 1) 00:10:51.500 21.017 - 21.128: 98.3336% ( 1) 00:10:51.500 21.128 - 21.240: 98.3736% ( 3) 00:10:51.500 21.240 - 21.352: 98.3869% ( 1) 00:10:51.500 21.464 - 21.576: 98.4002% ( 1) 00:10:51.500 21.687 - 21.799: 98.4135% ( 1) 00:10:51.500 21.799 - 21.911: 98.4269% ( 1) 00:10:51.500 22.023 - 22.134: 98.4935% ( 5) 00:10:51.500 22.134 - 22.246: 98.5202% ( 2) 00:10:51.500 22.358 - 22.470: 98.5602% ( 3) 00:10:51.500 22.470 - 22.582: 98.5869% ( 2) 00:10:51.500 22.582 - 22.693: 98.6135% ( 2) 00:10:51.500 22.693 - 22.805: 98.6935% ( 6) 00:10:51.500 22.805 - 22.917: 98.7468% ( 4) 00:10:51.500 22.917 - 23.029: 98.7735% ( 2) 00:10:51.500 23.029 - 23.141: 98.8268% ( 4) 00:10:51.500 23.141 - 23.252: 98.8801% ( 4) 00:10:51.500 23.252 - 23.364: 
98.9601% ( 6) 00:10:51.500 23.364 - 23.476: 99.0268% ( 5) 00:10:51.500 23.476 - 23.588: 99.0535% ( 2) 00:10:51.500 23.588 - 23.700: 99.1068% ( 4) 00:10:51.500 23.700 - 23.811: 99.1201% ( 1) 00:10:51.500 23.811 - 23.923: 99.1601% ( 3) 00:10:51.500 23.923 - 24.035: 99.1868% ( 2) 00:10:51.500 24.035 - 24.147: 99.2668% ( 6) 00:10:51.500 24.147 - 24.259: 99.2801% ( 1) 00:10:51.500 24.259 - 24.370: 99.3068% ( 2) 00:10:51.501 24.370 - 24.482: 99.3201% ( 1) 00:10:51.501 24.482 - 24.594: 99.3468% ( 2) 00:10:51.501 24.594 - 24.706: 99.3734% ( 2) 00:10:51.501 24.817 - 24.929: 99.3867% ( 1) 00:10:51.501 24.929 - 25.041: 99.4001% ( 1) 00:10:51.501 25.041 - 25.153: 99.4134% ( 1) 00:10:51.501 25.153 - 25.265: 99.4267% ( 1) 00:10:51.501 25.265 - 25.376: 99.4401% ( 1) 00:10:51.501 25.600 - 25.712: 99.4534% ( 1) 00:10:51.501 25.935 - 26.047: 99.4801% ( 2) 00:10:51.501 26.494 - 26.606: 99.4934% ( 1) 00:10:51.501 26.606 - 26.718: 99.5067% ( 1) 00:10:51.501 26.718 - 26.830: 99.5467% ( 3) 00:10:51.501 26.830 - 26.941: 99.5601% ( 1) 00:10:51.501 27.053 - 27.165: 99.5734% ( 1) 00:10:51.501 27.165 - 27.277: 99.5867% ( 1) 00:10:51.501 27.277 - 27.389: 99.6001% ( 1) 00:10:51.501 27.389 - 27.500: 99.6134% ( 1) 00:10:51.501 27.500 - 27.612: 99.6267% ( 1) 00:10:51.501 27.612 - 27.724: 99.6400% ( 1) 00:10:51.501 27.724 - 27.836: 99.6800% ( 3) 00:10:51.501 27.836 - 27.948: 99.7067% ( 2) 00:10:51.501 27.948 - 28.059: 99.7200% ( 1) 00:10:51.501 28.059 - 28.171: 99.7334% ( 1) 00:10:51.501 28.171 - 28.283: 99.7467% ( 1) 00:10:51.501 28.283 - 28.395: 99.7734% ( 2) 00:10:51.501 28.395 - 28.507: 99.7867% ( 1) 00:10:51.501 28.507 - 28.618: 99.8000% ( 1) 00:10:51.501 28.618 - 28.842: 99.8267% ( 2) 00:10:51.501 28.842 - 29.066: 99.8400% ( 1) 00:10:51.501 29.289 - 29.513: 99.8667% ( 2) 00:10:51.501 29.513 - 29.736: 99.8800% ( 1) 00:10:51.501 29.736 - 29.960: 99.8933% ( 1) 00:10:51.501 30.183 - 30.407: 99.9200% ( 2) 00:10:51.501 30.854 - 31.078: 99.9600% ( 3) 00:10:51.501 31.078 - 31.301: 99.9733% ( 1) 00:10:51.501 44.269 - 44.493: 99.9867% ( 1) 00:10:51.501 52.541 - 52.765: 100.0000% ( 1) 00:10:51.501 00:10:51.501 Complete histogram 00:10:51.501 ================== 00:10:51.501 Range in us Cumulative Count 00:10:51.501 5.506 - 5.534: 0.0267% ( 2) 00:10:51.501 5.534 - 5.562: 0.0800% ( 4) 00:10:51.501 5.562 - 5.590: 0.1733% ( 7) 00:10:51.501 5.590 - 5.617: 0.2800% ( 8) 00:10:51.501 5.617 - 5.645: 0.3333% ( 4) 00:10:51.501 5.645 - 5.673: 0.4133% ( 6) 00:10:51.501 5.673 - 5.701: 0.5599% ( 11) 00:10:51.501 5.701 - 5.729: 0.7066% ( 11) 00:10:51.501 5.729 - 5.757: 0.7732% ( 5) 00:10:51.501 5.757 - 5.785: 0.9332% ( 12) 00:10:51.501 5.785 - 5.813: 1.2398% ( 23) 00:10:51.501 5.813 - 5.841: 1.8797% ( 48) 00:10:51.501 5.841 - 5.869: 2.8929% ( 76) 00:10:51.501 5.869 - 5.897: 4.0128% ( 84) 00:10:51.501 5.897 - 5.925: 5.1993% ( 89) 00:10:51.501 5.925 - 5.953: 6.3592% ( 87) 00:10:51.501 5.953 - 5.981: 7.5590% ( 90) 00:10:51.501 5.981 - 6.009: 8.6522% ( 82) 00:10:51.501 6.009 - 6.037: 10.1986% ( 116) 00:10:51.501 6.037 - 6.065: 11.5051% ( 98) 00:10:51.501 6.065 - 6.093: 12.4383% ( 70) 00:10:51.501 6.093 - 6.121: 13.4115% ( 73) 00:10:51.501 6.121 - 6.148: 14.5447% ( 85) 00:10:51.501 6.148 - 6.176: 15.6246% ( 81) 00:10:51.501 6.176 - 6.204: 16.6111% ( 74) 00:10:51.501 6.204 - 6.232: 17.6776% ( 80) 00:10:51.501 6.232 - 6.260: 18.8508% ( 88) 00:10:51.501 6.260 - 6.288: 19.8374% ( 74) 00:10:51.501 6.288 - 6.316: 20.8772% ( 78) 00:10:51.501 6.316 - 6.344: 21.9437% ( 80) 00:10:51.501 6.344 - 6.372: 22.9836% ( 78) 00:10:51.501 6.372 - 6.400: 24.1301% ( 86) 
00:10:51.501 6.400 - 6.428: 25.4366% ( 98) 00:10:51.501 6.428 - 6.456: 26.8498% ( 106) 00:10:51.501 6.456 - 6.484: 28.5429% ( 127) 00:10:51.501 6.484 - 6.512: 30.2626% ( 129) 00:10:51.501 6.512 - 6.540: 32.0624% ( 135) 00:10:51.501 6.540 - 6.568: 33.8222% ( 132) 00:10:51.501 6.568 - 6.596: 35.8219% ( 150) 00:10:51.501 6.596 - 6.624: 37.7283% ( 143) 00:10:51.501 6.624 - 6.652: 39.4214% ( 127) 00:10:51.501 6.652 - 6.679: 40.9679% ( 116) 00:10:51.501 6.679 - 6.707: 42.4610% ( 112) 00:10:51.501 6.707 - 6.735: 44.3141% ( 139) 00:10:51.501 6.735 - 6.763: 46.0072% ( 127) 00:10:51.501 6.763 - 6.791: 47.5803% ( 118) 00:10:51.501 6.791 - 6.819: 48.7002% ( 84) 00:10:51.501 6.819 - 6.847: 50.0333% ( 100) 00:10:51.501 6.847 - 6.875: 51.1798% ( 86) 00:10:51.501 6.875 - 6.903: 52.3797% ( 90) 00:10:51.501 6.903 - 6.931: 53.6462% ( 95) 00:10:51.501 6.931 - 6.959: 54.5527% ( 68) 00:10:51.501 6.959 - 6.987: 55.5126% ( 72) 00:10:51.501 6.987 - 7.015: 56.3791% ( 65) 00:10:51.501 7.015 - 7.043: 57.4190% ( 78) 00:10:51.501 7.043 - 7.071: 58.2856% ( 65) 00:10:51.501 7.071 - 7.099: 59.2721% ( 74) 00:10:51.501 7.099 - 7.127: 60.2053% ( 70) 00:10:51.501 7.127 - 7.155: 61.1785% ( 73) 00:10:51.501 7.155 - 7.210: 63.5782% ( 180) 00:10:51.501 7.210 - 7.266: 66.1645% ( 194) 00:10:51.501 7.266 - 7.322: 68.6042% ( 183) 00:10:51.501 7.322 - 7.378: 70.7772% ( 163) 00:10:51.501 7.378 - 7.434: 72.6037% ( 137) 00:10:51.501 7.434 - 7.490: 74.2168% ( 121) 00:10:51.501 7.490 - 7.546: 75.4433% ( 92) 00:10:51.501 7.546 - 7.602: 76.7631% ( 99) 00:10:51.501 7.602 - 7.658: 78.0563% ( 97) 00:10:51.501 7.658 - 7.714: 79.1095% ( 79) 00:10:51.501 7.714 - 7.769: 80.0560% ( 71) 00:10:51.501 7.769 - 7.825: 80.8426% ( 59) 00:10:51.501 7.825 - 7.881: 81.5758% ( 55) 00:10:51.501 7.881 - 7.937: 82.4290% ( 64) 00:10:51.501 7.937 - 7.993: 83.1356% ( 53) 00:10:51.501 7.993 - 8.049: 83.8688% ( 55) 00:10:51.501 8.049 - 8.105: 84.8020% ( 70) 00:10:51.501 8.105 - 8.161: 85.8286% ( 77) 00:10:51.501 8.161 - 8.217: 86.6284% ( 60) 00:10:51.501 8.217 - 8.272: 87.2150% ( 44) 00:10:51.501 8.272 - 8.328: 87.8949% ( 51) 00:10:51.501 8.328 - 8.384: 88.4949% ( 45) 00:10:51.501 8.384 - 8.440: 88.8548% ( 27) 00:10:51.501 8.440 - 8.496: 89.1881% ( 25) 00:10:51.501 8.496 - 8.552: 89.5081% ( 24) 00:10:51.501 8.552 - 8.608: 89.7214% ( 16) 00:10:51.501 8.608 - 8.664: 89.8147% ( 7) 00:10:51.501 8.664 - 8.720: 89.9480% ( 10) 00:10:51.501 8.720 - 8.776: 90.0413% ( 7) 00:10:51.501 8.776 - 8.831: 90.1746% ( 10) 00:10:51.501 8.831 - 8.887: 90.2546% ( 6) 00:10:51.501 8.887 - 8.943: 90.3613% ( 8) 00:10:51.501 8.943 - 8.999: 90.4679% ( 8) 00:10:51.501 8.999 - 9.055: 90.6946% ( 17) 00:10:51.501 9.055 - 9.111: 91.0012% ( 23) 00:10:51.501 9.111 - 9.167: 91.3745% ( 28) 00:10:51.501 9.167 - 9.223: 91.8544% ( 36) 00:10:51.501 9.223 - 9.279: 92.3077% ( 34) 00:10:51.501 9.279 - 9.334: 92.5743% ( 20) 00:10:51.501 9.334 - 9.390: 92.9609% ( 29) 00:10:51.501 9.390 - 9.446: 93.2142% ( 19) 00:10:51.501 9.446 - 9.502: 93.3742% ( 12) 00:10:51.501 9.502 - 9.558: 93.5609% ( 14) 00:10:51.501 9.558 - 9.614: 93.6542% ( 7) 00:10:51.501 9.614 - 9.670: 93.7608% ( 8) 00:10:51.501 9.670 - 9.726: 93.9075% ( 11) 00:10:51.501 9.726 - 9.782: 94.0541% ( 11) 00:10:51.501 9.782 - 9.838: 94.2408% ( 14) 00:10:51.501 9.838 - 9.893: 94.3074% ( 5) 00:10:51.501 9.893 - 9.949: 94.4541% ( 11) 00:10:51.501 9.949 - 10.005: 94.5474% ( 7) 00:10:51.501 10.005 - 10.061: 94.6141% ( 5) 00:10:51.501 10.061 - 10.117: 94.7340% ( 9) 00:10:51.501 10.117 - 10.173: 94.7874% ( 4) 00:10:51.501 10.173 - 10.229: 94.8674% ( 6) 00:10:51.501 
10.229 - 10.285: 94.9607% ( 7) 00:10:51.501 10.285 - 10.341: 95.0540% ( 7) 00:10:51.501 10.341 - 10.397: 95.0807% ( 2) 00:10:51.501 10.397 - 10.452: 95.1606% ( 6) 00:10:51.501 10.452 - 10.508: 95.2140% ( 4) 00:10:51.501 10.508 - 10.564: 95.2406% ( 2) 00:10:51.501 10.564 - 10.620: 95.3073% ( 5) 00:10:51.501 10.620 - 10.676: 95.3606% ( 4) 00:10:51.501 10.676 - 10.732: 95.4273% ( 5) 00:10:51.501 10.732 - 10.788: 95.4539% ( 2) 00:10:51.501 10.788 - 10.844: 95.5339% ( 6) 00:10:51.501 10.844 - 10.900: 95.6006% ( 5) 00:10:51.501 10.900 - 10.955: 95.6806% ( 6) 00:10:51.501 10.955 - 11.011: 95.7739% ( 7) 00:10:51.501 11.011 - 11.067: 95.8006% ( 2) 00:10:51.501 11.067 - 11.123: 95.8539% ( 4) 00:10:51.501 11.123 - 11.179: 95.9072% ( 4) 00:10:51.501 11.179 - 11.235: 95.9739% ( 5) 00:10:51.501 11.291 - 11.347: 95.9872% ( 1) 00:10:51.501 11.347 - 11.403: 96.0139% ( 2) 00:10:51.501 11.403 - 11.459: 96.0405% ( 2) 00:10:51.501 11.514 - 11.570: 96.0672% ( 2) 00:10:51.501 11.570 - 11.626: 96.1205% ( 4) 00:10:51.501 11.626 - 11.682: 96.1605% ( 3) 00:10:51.501 11.682 - 11.738: 96.2005% ( 3) 00:10:51.501 11.738 - 11.794: 96.2138% ( 1) 00:10:51.501 11.794 - 11.850: 96.2538% ( 3) 00:10:51.501 11.906 - 11.962: 96.2672% ( 1) 00:10:51.501 11.962 - 12.017: 96.2805% ( 1) 00:10:51.501 12.017 - 12.073: 96.2938% ( 1) 00:10:51.501 12.073 - 12.129: 96.3205% ( 2) 00:10:51.501 12.129 - 12.185: 96.3338% ( 1) 00:10:51.501 12.185 - 12.241: 96.3472% ( 1) 00:10:51.501 12.241 - 12.297: 96.3738% ( 2) 00:10:51.501 12.297 - 12.353: 96.3871% ( 1) 00:10:51.501 12.353 - 12.409: 96.4138% ( 2) 00:10:51.501 12.409 - 12.465: 96.4538% ( 3) 00:10:51.501 12.465 - 12.521: 96.4805% ( 2) 00:10:51.501 12.521 - 12.576: 96.5071% ( 2) 00:10:51.502 12.576 - 12.632: 96.5205% ( 1) 00:10:51.502 12.688 - 12.744: 96.5338% ( 1) 00:10:51.502 12.744 - 12.800: 96.5471% ( 1) 00:10:51.502 12.800 - 12.856: 96.5738% ( 2) 00:10:51.502 12.912 - 12.968: 96.5871% ( 1) 00:10:51.502 12.968 - 13.024: 96.6005% ( 1) 00:10:51.502 13.024 - 13.079: 96.6271% ( 2) 00:10:51.502 13.079 - 13.135: 96.6538% ( 2) 00:10:51.502 13.191 - 13.247: 96.6671% ( 1) 00:10:51.502 13.247 - 13.303: 96.6804% ( 1) 00:10:51.502 13.303 - 13.359: 96.6938% ( 1) 00:10:51.502 13.359 - 13.415: 96.7071% ( 1) 00:10:51.502 13.527 - 13.583: 96.7338% ( 2) 00:10:51.502 13.638 - 13.694: 96.7471% ( 1) 00:10:51.502 13.694 - 13.750: 96.7604% ( 1) 00:10:51.502 13.750 - 13.806: 96.7871% ( 2) 00:10:51.502 13.806 - 13.862: 96.8004% ( 1) 00:10:51.502 13.974 - 14.030: 96.8138% ( 1) 00:10:51.502 14.030 - 14.086: 96.8271% ( 1) 00:10:51.502 14.086 - 14.141: 96.8404% ( 1) 00:10:51.502 14.197 - 14.253: 96.8671% ( 2) 00:10:51.502 14.309 - 14.421: 96.8804% ( 1) 00:10:51.502 14.645 - 14.756: 96.9071% ( 2) 00:10:51.502 14.980 - 15.092: 96.9337% ( 2) 00:10:51.502 15.203 - 15.315: 96.9737% ( 3) 00:10:51.502 15.315 - 15.427: 96.9871% ( 1) 00:10:51.502 15.427 - 15.539: 97.0004% ( 1) 00:10:51.502 15.539 - 15.651: 97.0671% ( 5) 00:10:51.502 15.651 - 15.762: 97.1204% ( 4) 00:10:51.502 15.762 - 15.874: 97.1604% ( 3) 00:10:51.502 15.874 - 15.986: 97.2670% ( 8) 00:10:51.502 15.986 - 16.098: 97.3070% ( 3) 00:10:51.502 16.098 - 16.210: 97.3470% ( 3) 00:10:51.502 16.210 - 16.321: 97.3737% ( 2) 00:10:51.502 16.321 - 16.433: 97.4670% ( 7) 00:10:51.502 16.433 - 16.545: 97.5337% ( 5) 00:10:51.502 16.545 - 16.657: 97.6270% ( 7) 00:10:51.502 16.657 - 16.769: 97.6670% ( 3) 00:10:51.502 16.769 - 16.880: 97.7070% ( 3) 00:10:51.502 16.880 - 16.992: 97.7870% ( 6) 00:10:51.502 16.992 - 17.104: 97.9203% ( 10) 00:10:51.502 17.104 - 17.216: 97.9736% ( 4) 
00:10:51.502 17.216 - 17.328: 98.0003% ( 2) 00:10:51.502 17.328 - 17.439: 98.0536% ( 4) 00:10:51.502 17.439 - 17.551: 98.1069% ( 4) 00:10:51.502 17.551 - 17.663: 98.1469% ( 3) 00:10:51.502 17.663 - 17.775: 98.1869% ( 3) 00:10:51.502 17.775 - 17.886: 98.2269% ( 3) 00:10:51.502 17.886 - 17.998: 98.2669% ( 3) 00:10:51.502 17.998 - 18.110: 98.3602% ( 7) 00:10:51.502 18.110 - 18.222: 98.4535% ( 7) 00:10:51.502 18.222 - 18.334: 98.5335% ( 6) 00:10:51.502 18.334 - 18.445: 98.6268% ( 7) 00:10:51.502 18.445 - 18.557: 98.7468% ( 9) 00:10:51.502 18.557 - 18.669: 98.8002% ( 4) 00:10:51.502 18.669 - 18.781: 98.8268% ( 2) 00:10:51.502 18.781 - 18.893: 98.9201% ( 7) 00:10:51.502 19.004 - 19.116: 98.9335% ( 1) 00:10:51.502 19.116 - 19.228: 98.9735% ( 3) 00:10:51.502 19.228 - 19.340: 98.9868% ( 1) 00:10:51.502 19.340 - 19.452: 99.0001% ( 1) 00:10:51.502 19.675 - 19.787: 99.0668% ( 5) 00:10:51.502 19.787 - 19.899: 99.0801% ( 1) 00:10:51.502 19.899 - 20.010: 99.1068% ( 2) 00:10:51.502 20.010 - 20.122: 99.1334% ( 2) 00:10:51.502 20.122 - 20.234: 99.1468% ( 1) 00:10:51.502 20.234 - 20.346: 99.1868% ( 3) 00:10:51.502 20.346 - 20.458: 99.2268% ( 3) 00:10:51.502 20.458 - 20.569: 99.2401% ( 1) 00:10:51.502 20.569 - 20.681: 99.2668% ( 2) 00:10:51.502 20.681 - 20.793: 99.2934% ( 2) 00:10:51.502 20.793 - 20.905: 99.3068% ( 1) 00:10:51.502 21.017 - 21.128: 99.3201% ( 1) 00:10:51.502 21.240 - 21.352: 99.3334% ( 1) 00:10:51.502 21.352 - 21.464: 99.3468% ( 1) 00:10:51.502 22.023 - 22.134: 99.3601% ( 1) 00:10:51.502 22.358 - 22.470: 99.3734% ( 1) 00:10:51.502 22.470 - 22.582: 99.4134% ( 3) 00:10:51.502 22.582 - 22.693: 99.4267% ( 1) 00:10:51.502 22.693 - 22.805: 99.4401% ( 1) 00:10:51.502 22.805 - 22.917: 99.4667% ( 2) 00:10:51.502 23.141 - 23.252: 99.4801% ( 1) 00:10:51.502 23.252 - 23.364: 99.5467% ( 5) 00:10:51.502 23.364 - 23.476: 99.5601% ( 1) 00:10:51.502 23.476 - 23.588: 99.5867% ( 2) 00:10:51.502 23.588 - 23.700: 99.6400% ( 4) 00:10:51.502 23.700 - 23.811: 99.6534% ( 1) 00:10:51.502 23.811 - 23.923: 99.6800% ( 2) 00:10:51.502 23.923 - 24.035: 99.7200% ( 3) 00:10:51.502 24.147 - 24.259: 99.7600% ( 3) 00:10:51.502 24.259 - 24.370: 99.7867% ( 2) 00:10:51.502 24.370 - 24.482: 99.8267% ( 3) 00:10:51.502 24.482 - 24.594: 99.8400% ( 1) 00:10:51.502 24.929 - 25.041: 99.8534% ( 1) 00:10:51.502 25.376 - 25.488: 99.8800% ( 2) 00:10:51.502 30.631 - 30.854: 99.8933% ( 1) 00:10:51.502 31.748 - 31.972: 99.9067% ( 1) 00:10:51.502 32.196 - 32.419: 99.9200% ( 1) 00:10:51.502 32.419 - 32.643: 99.9333% ( 1) 00:10:51.502 33.984 - 34.208: 99.9467% ( 1) 00:10:51.502 35.549 - 35.773: 99.9600% ( 1) 00:10:51.502 40.692 - 40.915: 99.9733% ( 1) 00:10:51.502 46.505 - 46.728: 99.9867% ( 1) 00:10:51.502 57.237 - 57.684: 100.0000% ( 1) 00:10:51.502 00:10:51.502 00:10:51.502 real 0m1.255s 00:10:51.502 user 0m1.087s 00:10:51.502 sys 0m0.123s 00:10:51.502 15:06:29 nvme.nvme_overhead -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:51.502 15:06:29 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:10:51.502 ************************************ 00:10:51.502 END TEST nvme_overhead 00:10:51.502 ************************************ 00:10:51.502 15:06:29 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:51.502 15:06:29 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:10:51.502 15:06:29 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:10:51.502 15:06:29 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:51.502 15:06:29 
nvme -- common/autotest_common.sh@10 -- # set +x 00:10:51.502 ************************************ 00:10:51.502 START TEST nvme_arbitration 00:10:51.502 ************************************ 00:10:51.502 15:06:29 nvme.nvme_arbitration -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:10:54.829 Initializing NVMe Controllers 00:10:54.829 Attached to 0000:00:10.0 00:10:54.829 Attached to 0000:00:11.0 00:10:54.829 Attached to 0000:00:13.0 00:10:54.829 Attached to 0000:00:12.0 00:10:54.829 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:10:54.829 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:10:54.829 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:10:54.829 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:10:54.829 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:10:54.829 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:10:54.829 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:10:54.830 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:10:54.830 Initialization complete. Launching workers. 00:10:54.830 Starting thread on core 1 with urgent priority queue 00:10:54.830 Starting thread on core 2 with urgent priority queue 00:10:54.830 Starting thread on core 3 with urgent priority queue 00:10:54.830 Starting thread on core 0 with urgent priority queue 00:10:54.830 QEMU NVMe Ctrl (12340 ) core 0: 533.33 IO/s 187.50 secs/100000 ios 00:10:54.830 QEMU NVMe Ctrl (12342 ) core 0: 533.33 IO/s 187.50 secs/100000 ios 00:10:54.830 QEMU NVMe Ctrl (12341 ) core 1: 533.33 IO/s 187.50 secs/100000 ios 00:10:54.830 QEMU NVMe Ctrl (12342 ) core 1: 533.33 IO/s 187.50 secs/100000 ios 00:10:54.830 QEMU NVMe Ctrl (12343 ) core 2: 533.33 IO/s 187.50 secs/100000 ios 00:10:54.830 QEMU NVMe Ctrl (12342 ) core 3: 490.67 IO/s 203.80 secs/100000 ios 00:10:54.830 ======================================================== 00:10:54.830 00:10:54.830 00:10:54.830 real 0m3.408s 00:10:54.830 user 0m9.436s 00:10:54.830 sys 0m0.140s 00:10:54.830 15:06:32 nvme.nvme_arbitration -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:54.830 15:06:32 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:10:54.830 ************************************ 00:10:54.830 END TEST nvme_arbitration 00:10:54.830 ************************************ 00:10:54.830 15:06:32 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:54.830 15:06:32 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:10:54.830 15:06:32 nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:54.830 15:06:32 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:54.830 15:06:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:54.830 ************************************ 00:10:54.830 START TEST nvme_single_aen 00:10:54.830 ************************************ 00:10:54.830 15:06:32 nvme.nvme_single_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:10:55.087 Asynchronous Event Request test 00:10:55.088 Attached to 0000:00:10.0 00:10:55.088 Attached to 0000:00:11.0 00:10:55.088 Attached to 0000:00:13.0 00:10:55.088 Attached to 0000:00:12.0 00:10:55.088 Reset controller to setup AER completions for this process 00:10:55.088 Registering asynchronous event callbacks... 
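The arbitration example above echoes the full configuration it ran with and then reports per-controller, per-core throughput for the urgent priority queues. A sketch of re-running it with the same parameters, copied from the "run with configuration" line above:

  # urgent-priority arbitration run across cores 0-3, parameters as echoed by the job above
  sudo /home/vagrant/spdk_repo/spdk/build/examples/arbitration \
      -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0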
00:10:55.088 Getting orig temperature thresholds of all controllers 00:10:55.088 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:55.088 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:55.088 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:55.088 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:55.088 Setting all controllers temperature threshold low to trigger AER 00:10:55.088 Waiting for all controllers temperature threshold to be set lower 00:10:55.088 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:55.088 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:10:55.088 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:55.088 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:10:55.088 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:55.088 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:10:55.088 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:55.088 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:10:55.088 Waiting for all controllers to trigger AER and reset threshold 00:10:55.088 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:55.088 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:55.088 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:55.088 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:55.088 Cleaning up... 00:10:55.088 00:10:55.088 real 0m0.260s 00:10:55.088 user 0m0.087s 00:10:55.088 sys 0m0.131s 00:10:55.088 15:06:33 nvme.nvme_single_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:55.088 15:06:33 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:10:55.088 ************************************ 00:10:55.088 END TEST nvme_single_aen 00:10:55.088 ************************************ 00:10:55.088 15:06:33 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:55.088 15:06:33 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:10:55.088 15:06:33 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:55.088 15:06:33 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:55.088 15:06:33 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:55.088 ************************************ 00:10:55.088 START TEST nvme_doorbell_aers 00:10:55.088 ************************************ 00:10:55.088 15:06:33 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1123 -- # nvme_doorbell_aers 00:10:55.088 15:06:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:10:55.088 15:06:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:10:55.088 15:06:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:10:55.088 15:06:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:10:55.088 15:06:33 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # bdfs=() 00:10:55.088 15:06:33 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # local bdfs 00:10:55.088 15:06:33 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:55.088 15:06:33 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:55.088 15:06:33 nvme.nvme_doorbell_aers -- 
common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:10:55.088 15:06:33 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:10:55.088 15:06:33 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:55.088 15:06:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:55.088 15:06:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:55.345 [2024-07-15 15:06:33.438051] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70487) is not found. Dropping the request. 00:11:05.334 Executing: test_write_invalid_db 00:11:05.334 Waiting for AER completion... 00:11:05.334 Failure: test_write_invalid_db 00:11:05.334 00:11:05.334 Executing: test_invalid_db_write_overflow_sq 00:11:05.334 Waiting for AER completion... 00:11:05.334 Failure: test_invalid_db_write_overflow_sq 00:11:05.334 00:11:05.334 Executing: test_invalid_db_write_overflow_cq 00:11:05.334 Waiting for AER completion... 00:11:05.334 Failure: test_invalid_db_write_overflow_cq 00:11:05.334 00:11:05.334 15:06:43 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:05.334 15:06:43 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:05.592 [2024-07-15 15:06:43.475966] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70487) is not found. Dropping the request. 00:11:15.572 Executing: test_write_invalid_db 00:11:15.572 Waiting for AER completion... 00:11:15.572 Failure: test_write_invalid_db 00:11:15.572 00:11:15.572 Executing: test_invalid_db_write_overflow_sq 00:11:15.572 Waiting for AER completion... 00:11:15.572 Failure: test_invalid_db_write_overflow_sq 00:11:15.572 00:11:15.572 Executing: test_invalid_db_write_overflow_cq 00:11:15.572 Waiting for AER completion... 00:11:15.572 Failure: test_invalid_db_write_overflow_cq 00:11:15.572 00:11:15.572 15:06:53 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:15.572 15:06:53 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:15.572 [2024-07-15 15:06:53.520708] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70487) is not found. Dropping the request. 00:11:25.561 Executing: test_write_invalid_db 00:11:25.561 Waiting for AER completion... 00:11:25.561 Failure: test_write_invalid_db 00:11:25.561 00:11:25.561 Executing: test_invalid_db_write_overflow_sq 00:11:25.561 Waiting for AER completion... 00:11:25.561 Failure: test_invalid_db_write_overflow_sq 00:11:25.561 00:11:25.561 Executing: test_invalid_db_write_overflow_cq 00:11:25.561 Waiting for AER completion... 
00:11:25.561 Failure: test_invalid_db_write_overflow_cq 00:11:25.561 00:11:25.561 15:07:03 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:25.561 15:07:03 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:25.561 [2024-07-15 15:07:03.565884] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70487) is not found. Dropping the request. 00:11:35.552 Executing: test_write_invalid_db 00:11:35.552 Waiting for AER completion... 00:11:35.552 Failure: test_write_invalid_db 00:11:35.552 00:11:35.552 Executing: test_invalid_db_write_overflow_sq 00:11:35.552 Waiting for AER completion... 00:11:35.552 Failure: test_invalid_db_write_overflow_sq 00:11:35.552 00:11:35.552 Executing: test_invalid_db_write_overflow_cq 00:11:35.552 Waiting for AER completion... 00:11:35.552 Failure: test_invalid_db_write_overflow_cq 00:11:35.552 00:11:35.552 00:11:35.552 real 0m40.279s 00:11:35.552 user 0m35.915s 00:11:35.552 sys 0m4.008s 00:11:35.552 15:07:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:35.552 15:07:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:11:35.552 ************************************ 00:11:35.552 END TEST nvme_doorbell_aers 00:11:35.552 ************************************ 00:11:35.552 15:07:13 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:35.552 15:07:13 nvme -- nvme/nvme.sh@97 -- # uname 00:11:35.552 15:07:13 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:11:35.552 15:07:13 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:11:35.552 15:07:13 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:11:35.552 15:07:13 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:35.552 15:07:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:35.552 ************************************ 00:11:35.552 START TEST nvme_multi_aen 00:11:35.552 ************************************ 00:11:35.552 15:07:13 nvme.nvme_multi_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:11:35.552 [2024-07-15 15:07:13.626476] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70487) is not found. Dropping the request. 00:11:35.552 [2024-07-15 15:07:13.626560] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70487) is not found. Dropping the request. 00:11:35.552 [2024-07-15 15:07:13.626574] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70487) is not found. Dropping the request. 00:11:35.552 [2024-07-15 15:07:13.627876] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70487) is not found. Dropping the request. 00:11:35.552 [2024-07-15 15:07:13.627903] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70487) is not found. Dropping the request. 00:11:35.552 [2024-07-15 15:07:13.627913] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70487) is not found. Dropping the request. 
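The doorbell_aers loop above comes from nvme/nvme.sh: it enumerates the controller addresses with gen_nvme.sh and jq, then runs the doorbell test against each one under a 10-second timeout. A sketch of that same loop, assuming the repo layout used by this job:

  # iterate over every local controller and run the doorbell AER test, as nvme.sh does above
  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  for bdf in "${bdfs[@]}"; do
      timeout --preserve-status 10 \
          "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
  done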
00:11:35.552 [2024-07-15 15:07:13.628957] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70487) is not found. Dropping the request. 00:11:35.552 [2024-07-15 15:07:13.628987] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70487) is not found. Dropping the request. 00:11:35.552 [2024-07-15 15:07:13.629022] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70487) is not found. Dropping the request. 00:11:35.552 [2024-07-15 15:07:13.630040] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70487) is not found. Dropping the request. 00:11:35.552 [2024-07-15 15:07:13.630067] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70487) is not found. Dropping the request. 00:11:35.552 [2024-07-15 15:07:13.630076] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70487) is not found. Dropping the request. 00:11:35.552 Child process pid: 71007 00:11:35.812 [Child] Asynchronous Event Request test 00:11:35.812 [Child] Attached to 0000:00:10.0 00:11:35.812 [Child] Attached to 0000:00:11.0 00:11:35.812 [Child] Attached to 0000:00:13.0 00:11:35.812 [Child] Attached to 0000:00:12.0 00:11:35.812 [Child] Registering asynchronous event callbacks... 00:11:35.812 [Child] Getting orig temperature thresholds of all controllers 00:11:35.812 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:35.812 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:35.812 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:35.812 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:35.812 [Child] Waiting for all controllers to trigger AER and reset threshold 00:11:35.812 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:35.812 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:35.812 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:35.812 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:35.812 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:35.812 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:35.812 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:35.812 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:35.812 [Child] Cleaning up... 00:11:35.812 Asynchronous Event Request test 00:11:35.812 Attached to 0000:00:10.0 00:11:35.812 Attached to 0000:00:11.0 00:11:35.812 Attached to 0000:00:13.0 00:11:35.812 Attached to 0000:00:12.0 00:11:35.812 Reset controller to setup AER completions for this process 00:11:35.812 Registering asynchronous event callbacks... 
00:11:35.812 Getting orig temperature thresholds of all controllers 00:11:35.812 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:35.812 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:35.812 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:35.812 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:35.812 Setting all controllers temperature threshold low to trigger AER 00:11:35.812 Waiting for all controllers temperature threshold to be set lower 00:11:35.812 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:35.812 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:11:35.812 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:35.812 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:11:35.812 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:35.812 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:11:35.812 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:35.812 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:11:35.812 Waiting for all controllers to trigger AER and reset threshold 00:11:35.812 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:35.812 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:35.812 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:35.812 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:35.812 Cleaning up... 00:11:36.070 00:11:36.070 real 0m0.511s 00:11:36.070 user 0m0.177s 00:11:36.070 sys 0m0.232s 00:11:36.070 15:07:13 nvme.nvme_multi_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:36.070 15:07:13 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:11:36.070 ************************************ 00:11:36.070 END TEST nvme_multi_aen 00:11:36.070 ************************************ 00:11:36.071 15:07:13 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:36.071 15:07:13 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:11:36.071 15:07:13 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:36.071 15:07:13 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:36.071 15:07:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:36.071 ************************************ 00:11:36.071 START TEST nvme_startup 00:11:36.071 ************************************ 00:11:36.071 15:07:13 nvme.nvme_startup -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:11:36.329 Initializing NVMe Controllers 00:11:36.329 Attached to 0000:00:10.0 00:11:36.329 Attached to 0000:00:11.0 00:11:36.329 Attached to 0000:00:13.0 00:11:36.329 Attached to 0000:00:12.0 00:11:36.329 Initialization complete. 00:11:36.329 Time used:158344.391 (us). 
00:11:36.329 00:11:36.329 real 0m0.253s 00:11:36.329 user 0m0.090s 00:11:36.329 sys 0m0.119s 00:11:36.329 15:07:14 nvme.nvme_startup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:36.329 15:07:14 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:11:36.329 ************************************ 00:11:36.329 END TEST nvme_startup 00:11:36.329 ************************************ 00:11:36.329 15:07:14 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:36.329 15:07:14 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:11:36.329 15:07:14 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:36.329 15:07:14 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:36.329 15:07:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:36.329 ************************************ 00:11:36.329 START TEST nvme_multi_secondary 00:11:36.329 ************************************ 00:11:36.329 15:07:14 nvme.nvme_multi_secondary -- common/autotest_common.sh@1123 -- # nvme_multi_secondary 00:11:36.329 15:07:14 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=71063 00:11:36.329 15:07:14 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:11:36.329 15:07:14 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=71064 00:11:36.329 15:07:14 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:11:36.329 15:07:14 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:11:39.659 Initializing NVMe Controllers 00:11:39.659 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:39.659 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:39.659 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:39.659 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:39.659 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:11:39.659 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:11:39.659 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:11:39.659 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:11:39.659 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:11:39.659 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:11:39.659 Initialization complete. Launching workers. 
00:11:39.659 ======================================================== 00:11:39.659 Latency(us) 00:11:39.659 Device Information : IOPS MiB/s Average min max 00:11:39.659 PCIE (0000:00:10.0) NSID 1 from core 1: 6098.63 23.82 2621.29 864.64 6852.90 00:11:39.659 PCIE (0000:00:11.0) NSID 1 from core 1: 6098.63 23.82 2623.00 894.24 6397.97 00:11:39.659 PCIE (0000:00:13.0) NSID 1 from core 1: 6098.63 23.82 2623.13 852.53 7922.19 00:11:39.659 PCIE (0000:00:12.0) NSID 1 from core 1: 6098.63 23.82 2623.30 871.16 7896.62 00:11:39.659 PCIE (0000:00:12.0) NSID 2 from core 1: 6098.63 23.82 2623.56 875.24 7783.69 00:11:39.659 PCIE (0000:00:12.0) NSID 3 from core 1: 6098.63 23.82 2623.62 895.22 7253.09 00:11:39.659 ======================================================== 00:11:39.659 Total : 36591.80 142.94 2622.98 852.53 7922.19 00:11:39.659 00:11:39.916 Initializing NVMe Controllers 00:11:39.916 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:39.916 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:39.916 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:39.916 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:39.916 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:11:39.916 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:11:39.916 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:11:39.916 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:11:39.916 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:11:39.916 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:11:39.916 Initialization complete. Launching workers. 00:11:39.916 ======================================================== 00:11:39.916 Latency(us) 00:11:39.916 Device Information : IOPS MiB/s Average min max 00:11:39.916 PCIE (0000:00:10.0) NSID 1 from core 2: 3140.03 12.27 5093.39 1312.70 13011.26 00:11:39.916 PCIE (0000:00:11.0) NSID 1 from core 2: 3140.03 12.27 5096.32 1359.78 13114.17 00:11:39.916 PCIE (0000:00:13.0) NSID 1 from core 2: 3140.03 12.27 5101.22 1274.29 16554.92 00:11:39.916 PCIE (0000:00:12.0) NSID 1 from core 2: 3140.03 12.27 5101.90 1373.58 12769.26 00:11:39.916 PCIE (0000:00:12.0) NSID 2 from core 2: 3140.03 12.27 5101.75 1378.59 13079.59 00:11:39.916 PCIE (0000:00:12.0) NSID 3 from core 2: 3140.03 12.27 5101.88 1352.85 16631.62 00:11:39.916 ======================================================== 00:11:39.916 Total : 18840.18 73.59 5099.41 1274.29 16631.62 00:11:39.916 00:11:39.916 15:07:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 71063 00:11:41.818 Initializing NVMe Controllers 00:11:41.818 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:41.818 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:41.818 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:41.818 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:41.818 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:41.818 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:41.818 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:41.818 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:41.818 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:41.818 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:41.818 Initialization complete. Launching workers. 
00:11:41.818 ======================================================== 00:11:41.818 Latency(us) 00:11:41.818 Device Information : IOPS MiB/s Average min max 00:11:41.818 PCIE (0000:00:10.0) NSID 1 from core 0: 9355.69 36.55 1708.59 834.03 7341.11 00:11:41.818 PCIE (0000:00:11.0) NSID 1 from core 0: 9355.69 36.55 1709.72 836.22 8328.05 00:11:41.818 PCIE (0000:00:13.0) NSID 1 from core 0: 9355.69 36.55 1709.70 848.43 8255.55 00:11:41.818 PCIE (0000:00:12.0) NSID 1 from core 0: 9355.69 36.55 1709.68 849.71 7380.12 00:11:41.818 PCIE (0000:00:12.0) NSID 2 from core 0: 9355.69 36.55 1709.66 840.43 7202.08 00:11:41.818 PCIE (0000:00:12.0) NSID 3 from core 0: 9355.69 36.55 1709.64 859.02 6885.74 00:11:41.818 ======================================================== 00:11:41.818 Total : 56134.14 219.27 1709.50 834.03 8328.05 00:11:41.818 00:11:41.818 15:07:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 71064 00:11:41.818 15:07:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=71133 00:11:41.818 15:07:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:11:41.818 15:07:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:11:41.818 15:07:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=71134 00:11:41.818 15:07:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:11:45.118 Initializing NVMe Controllers 00:11:45.118 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:45.118 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:45.118 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:45.118 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:45.118 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:11:45.118 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:11:45.118 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:11:45.118 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:11:45.118 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:11:45.118 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:11:45.118 Initialization complete. Launching workers. 
00:11:45.118 ======================================================== 00:11:45.118 Latency(us) 00:11:45.118 Device Information : IOPS MiB/s Average min max 00:11:45.118 PCIE (0000:00:10.0) NSID 1 from core 1: 6115.12 23.89 2614.28 846.96 7117.12 00:11:45.118 PCIE (0000:00:11.0) NSID 1 from core 1: 6115.12 23.89 2616.03 846.40 7332.78 00:11:45.118 PCIE (0000:00:13.0) NSID 1 from core 1: 6115.12 23.89 2616.06 855.39 8259.00 00:11:45.118 PCIE (0000:00:12.0) NSID 1 from core 1: 6115.12 23.89 2616.14 864.50 7724.36 00:11:45.118 PCIE (0000:00:12.0) NSID 2 from core 1: 6115.12 23.89 2616.28 871.26 7147.76 00:11:45.118 PCIE (0000:00:12.0) NSID 3 from core 1: 6120.46 23.91 2614.22 851.11 7307.34 00:11:45.118 ======================================================== 00:11:45.118 Total : 36696.07 143.34 2615.50 846.40 8259.00 00:11:45.118 00:11:45.118 Initializing NVMe Controllers 00:11:45.118 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:45.118 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:45.118 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:45.118 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:45.118 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:45.118 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:45.118 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:45.118 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:45.118 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:45.118 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:45.118 Initialization complete. Launching workers. 00:11:45.118 ======================================================== 00:11:45.118 Latency(us) 00:11:45.118 Device Information : IOPS MiB/s Average min max 00:11:45.118 PCIE (0000:00:10.0) NSID 1 from core 0: 6431.98 25.12 2485.38 844.20 6032.50 00:11:45.118 PCIE (0000:00:11.0) NSID 1 from core 0: 6431.98 25.12 2486.93 868.27 5282.38 00:11:45.118 PCIE (0000:00:13.0) NSID 1 from core 0: 6431.98 25.12 2486.94 886.68 5562.21 00:11:45.118 PCIE (0000:00:12.0) NSID 1 from core 0: 6431.98 25.12 2486.87 878.33 5688.31 00:11:45.118 PCIE (0000:00:12.0) NSID 2 from core 0: 6431.98 25.12 2486.81 878.16 6061.79 00:11:45.118 PCIE (0000:00:12.0) NSID 3 from core 0: 6431.98 25.12 2486.75 881.42 6074.72 00:11:45.118 ======================================================== 00:11:45.118 Total : 38591.88 150.75 2486.61 844.20 6074.72 00:11:45.118 00:11:47.031 Initializing NVMe Controllers 00:11:47.031 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:47.031 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:47.031 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:47.031 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:47.031 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:11:47.031 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:11:47.031 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:11:47.031 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:11:47.031 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:11:47.031 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:11:47.031 Initialization complete. Launching workers. 
00:11:47.031 ======================================================== 00:11:47.031 Latency(us) 00:11:47.031 Device Information : IOPS MiB/s Average min max 00:11:47.031 PCIE (0000:00:10.0) NSID 1 from core 2: 3359.04 13.12 4760.76 980.74 13264.79 00:11:47.031 PCIE (0000:00:11.0) NSID 1 from core 2: 3359.04 13.12 4762.02 983.85 12869.64 00:11:47.031 PCIE (0000:00:13.0) NSID 1 from core 2: 3359.04 13.12 4762.72 998.60 13250.39 00:11:47.031 PCIE (0000:00:12.0) NSID 1 from core 2: 3359.04 13.12 4762.95 1000.45 16738.37 00:11:47.031 PCIE (0000:00:12.0) NSID 2 from core 2: 3359.04 13.12 4762.93 987.93 16660.60 00:11:47.031 PCIE (0000:00:12.0) NSID 3 from core 2: 3359.04 13.12 4762.90 975.94 16253.49 00:11:47.031 ======================================================== 00:11:47.031 Total : 20154.23 78.73 4762.38 975.94 16738.37 00:11:47.031 00:11:47.031 15:07:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 71133 00:11:47.031 15:07:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 71134 00:11:47.031 00:11:47.031 real 0m10.804s 00:11:47.031 user 0m18.482s 00:11:47.031 sys 0m0.832s 00:11:47.031 15:07:25 nvme.nvme_multi_secondary -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:47.031 15:07:25 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:11:47.031 ************************************ 00:11:47.031 END TEST nvme_multi_secondary 00:11:47.031 ************************************ 00:11:47.290 15:07:25 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:47.290 15:07:25 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:11:47.290 15:07:25 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:11:47.290 15:07:25 nvme -- common/autotest_common.sh@1087 -- # [[ -e /proc/70080 ]] 00:11:47.290 15:07:25 nvme -- common/autotest_common.sh@1088 -- # kill 70080 00:11:47.290 15:07:25 nvme -- common/autotest_common.sh@1089 -- # wait 70080 00:11:47.290 [2024-07-15 15:07:25.177900] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 71006) is not found. Dropping the request. 00:11:47.290 [2024-07-15 15:07:25.178061] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 71006) is not found. Dropping the request. 00:11:47.290 [2024-07-15 15:07:25.178107] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 71006) is not found. Dropping the request. 00:11:47.290 [2024-07-15 15:07:25.178147] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 71006) is not found. Dropping the request. 00:11:47.290 [2024-07-15 15:07:25.185085] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 71006) is not found. Dropping the request. 00:11:47.290 [2024-07-15 15:07:25.185183] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 71006) is not found. Dropping the request. 00:11:47.290 [2024-07-15 15:07:25.185222] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 71006) is not found. Dropping the request. 00:11:47.290 [2024-07-15 15:07:25.185313] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 71006) is not found. Dropping the request. 
00:11:47.290 [2024-07-15 15:07:25.190767] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 71006) is not found. Dropping the request. 00:11:47.291 [2024-07-15 15:07:25.190839] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 71006) is not found. Dropping the request. 00:11:47.291 [2024-07-15 15:07:25.190865] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 71006) is not found. Dropping the request. 00:11:47.291 [2024-07-15 15:07:25.190890] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 71006) is not found. Dropping the request. 00:11:47.291 [2024-07-15 15:07:25.195747] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 71006) is not found. Dropping the request. 00:11:47.291 [2024-07-15 15:07:25.195811] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 71006) is not found. Dropping the request. 00:11:47.291 [2024-07-15 15:07:25.195834] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 71006) is not found. Dropping the request. 00:11:47.291 [2024-07-15 15:07:25.195859] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 71006) is not found. Dropping the request. 00:11:47.552 [2024-07-15 15:07:25.453557] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:11:47.552 15:07:25 nvme -- common/autotest_common.sh@1091 -- # rm -f /var/run/spdk_stub0 00:11:47.552 15:07:25 nvme -- common/autotest_common.sh@1095 -- # echo 2 00:11:47.552 15:07:25 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:47.552 15:07:25 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:47.552 15:07:25 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:47.552 15:07:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:47.552 ************************************ 00:11:47.552 START TEST bdev_nvme_reset_stuck_adm_cmd 00:11:47.552 ************************************ 00:11:47.552 15:07:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:47.552 * Looking for test storage... 
00:11:47.552 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:47.552 15:07:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:11:47.552 15:07:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:11:47.552 15:07:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:11:47.552 15:07:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:11:47.552 15:07:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:11:47.552 15:07:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:11:47.552 15:07:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # bdfs=() 00:11:47.552 15:07:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # local bdfs 00:11:47.552 15:07:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:11:47.552 15:07:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:11:47.552 15:07:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # bdfs=() 00:11:47.552 15:07:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # local bdfs 00:11:47.552 15:07:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:47.552 15:07:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:47.552 15:07:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:11:47.812 15:07:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:11:47.812 15:07:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:47.812 15:07:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:11:47.812 15:07:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:11:47.812 15:07:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:11:47.812 15:07:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=71288 00:11:47.812 15:07:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:11:47.812 15:07:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:47.812 15:07:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 71288 00:11:47.812 15:07:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@829 -- # '[' -z 71288 ']' 00:11:47.812 15:07:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.812 15:07:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:47.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
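The get_first_nvme_bdf trace above resolves the first controller address (0000:00:10.0 here) by piping gen_nvme.sh output through jq, then starts spdk_tgt; the trace that follows attaches that controller and arms an admin-command error injection before sending the "stuck" command. A rough sketch of the sequence, with the RPC names and arguments copied from the trace and everything else (pid capture, head -n1 selection) illustrative only:

  SPDK=/home/vagrant/spdk_repo/spdk
  RPC="$SPDK/scripts/rpc.py"

  # First NVMe PCI address, as get_first_nvme_bdf derives it
  bdf=$("$SPDK"/scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)

  "$SPDK"/build/bin/spdk_tgt -m 0xF &      # target app on cores 0-3
  spdk_target_pid=$!
  # (the harness then waits for /var/tmp/spdk.sock to answer)

  "$RPC" bdev_nvme_attach_controller -b nvme0 -t PCIe -a "$bdf"
  # One-shot injection for admin opcode 10 (0x0a, Get Features): complete it with
  # sct=0 / sc=1, hold it for up to 15 s, and never submit it to the device.
  "$RPC" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
      --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit

The stuck Get Features is then issued with bdev_nvme_send_cmd, bdev_nvme_reset_controller nvme0 aborts it, and the saved completion is read back with jq -r .cpl and base64-decoded so its SC/SCT bits can be compared against the injected values (sc=0x1, sct=0x0), as the rest of the trace shows.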
00:11:47.812 15:07:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.813 15:07:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:47.813 15:07:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:47.813 [2024-07-15 15:07:25.829318] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:11:47.813 [2024-07-15 15:07:25.829466] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71288 ] 00:11:48.071 [2024-07-15 15:07:26.005597] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:48.331 [2024-07-15 15:07:26.250242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:48.331 [2024-07-15 15:07:26.250477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.331 [2024-07-15 15:07:26.250393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:48.331 [2024-07-15 15:07:26.250506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:49.269 15:07:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:49.269 15:07:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # return 0 00:11:49.269 15:07:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:11:49.269 15:07:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.269 15:07:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:49.269 nvme0n1 00:11:49.269 15:07:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.269 15:07:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:11:49.269 15:07:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_iQmNs.txt 00:11:49.269 15:07:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:11:49.269 15:07:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.269 15:07:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:49.269 true 00:11:49.269 15:07:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.269 15:07:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:11:49.269 15:07:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1721056047 00:11:49.269 15:07:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=71322 00:11:49.269 15:07:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:11:49.269 15:07:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:49.269 15:07:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:51.802 [2024-07-15 15:07:29.320879] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:11:51.802 [2024-07-15 15:07:29.321226] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:11:51.802 [2024-07-15 15:07:29.321261] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:51.802 [2024-07-15 15:07:29.321279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.802 [2024-07-15 15:07:29.323091] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.802 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 71322 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 71322 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 71322 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_iQmNs.txt 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_iQmNs.txt 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 71288 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@948 -- # '[' -z 71288 ']' 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # kill -0 71288 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # uname 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71288 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:51.802 killing process with pid 71288 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71288' 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@967 -- # kill 71288 00:11:51.802 15:07:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # wait 71288 00:11:54.349 15:07:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:11:54.349 15:07:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:11:54.349 00:11:54.349 real 0m6.644s 00:11:54.349 user 0m22.784s 00:11:54.349 sys 0m0.734s 00:11:54.349 15:07:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:54.349 15:07:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:54.349 ************************************ 00:11:54.349 END TEST bdev_nvme_reset_stuck_adm_cmd 00:11:54.349 ************************************ 00:11:54.349 15:07:32 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:54.349 15:07:32 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:11:54.349 15:07:32 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:11:54.349 15:07:32 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:54.349 15:07:32 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:54.349 15:07:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:54.349 ************************************ 00:11:54.349 START TEST nvme_fio 00:11:54.349 ************************************ 00:11:54.349 15:07:32 nvme.nvme_fio -- common/autotest_common.sh@1123 -- # nvme_fio_test 00:11:54.349 15:07:32 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:11:54.349 15:07:32 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:11:54.349 15:07:32 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:11:54.349 15:07:32 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # bdfs=() 00:11:54.349 15:07:32 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # local bdfs 00:11:54.349 15:07:32 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:54.349 15:07:32 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:54.349 15:07:32 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:11:54.349 15:07:32 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:11:54.349 15:07:32 nvme.nvme_fio -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:54.349 15:07:32 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:11:54.349 15:07:32 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:11:54.349 15:07:32 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:54.349 15:07:32 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:54.349 15:07:32 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:54.613 15:07:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:54.613 15:07:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:54.871 15:07:32 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:54.871 15:07:32 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:54.871 15:07:32 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:54.871 15:07:32 
nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:11:54.871 15:07:32 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:54.871 15:07:32 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:11:54.871 15:07:32 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:54.871 15:07:32 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:11:54.871 15:07:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:11:54.871 15:07:32 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:11:54.871 15:07:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:54.871 15:07:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:11:54.871 15:07:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:11:54.872 15:07:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:54.872 15:07:32 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:54.872 15:07:32 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:11:54.872 15:07:32 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:54.872 15:07:32 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:55.130 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:55.130 fio-3.35 00:11:55.130 Starting 1 thread 00:12:01.745 00:12:01.745 test: (groupid=0, jobs=1): err= 0: pid=71471: Mon Jul 15 15:07:38 2024 00:12:01.745 read: IOPS=23.3k, BW=91.1MiB/s (95.5MB/s)(182MiB/2001msec) 00:12:01.745 slat (nsec): min=4425, max=51907, avg=5493.55, stdev=1158.59 00:12:01.745 clat (usec): min=241, max=11776, avg=2736.30, stdev=282.45 00:12:01.745 lat (usec): min=247, max=11828, avg=2741.80, stdev=282.87 00:12:01.745 clat percentiles (usec): 00:12:01.745 | 1.00th=[ 2507], 5.00th=[ 2573], 10.00th=[ 2606], 20.00th=[ 2638], 00:12:01.745 | 30.00th=[ 2671], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2737], 00:12:01.745 | 70.00th=[ 2769], 80.00th=[ 2802], 90.00th=[ 2868], 95.00th=[ 2933], 00:12:01.745 | 99.00th=[ 3359], 99.50th=[ 4178], 99.90th=[ 6456], 99.95th=[ 8225], 00:12:01.745 | 99.99th=[11469] 00:12:01.745 bw ( KiB/s): min=91416, max=95360, per=100.00%, avg=93485.33, stdev=1979.19, samples=3 00:12:01.745 iops : min=22854, max=23840, avg=23371.33, stdev=494.80, samples=3 00:12:01.745 write: IOPS=23.2k, BW=90.4MiB/s (94.8MB/s)(181MiB/2001msec); 0 zone resets 00:12:01.745 slat (nsec): min=4597, max=41313, avg=5608.63, stdev=1160.98 00:12:01.745 clat (usec): min=210, max=11586, avg=2741.12, stdev=290.00 00:12:01.745 lat (usec): min=215, max=11610, avg=2746.72, stdev=290.45 00:12:01.745 clat percentiles (usec): 00:12:01.745 | 1.00th=[ 2507], 5.00th=[ 2573], 10.00th=[ 2606], 20.00th=[ 2638], 00:12:01.745 | 30.00th=[ 2671], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2737], 00:12:01.745 | 70.00th=[ 2769], 80.00th=[ 2802], 90.00th=[ 2868], 95.00th=[ 2933], 00:12:01.745 | 99.00th=[ 3392], 99.50th=[ 4293], 99.90th=[ 6587], 99.95th=[ 8717], 00:12:01.745 | 99.99th=[11207] 00:12:01.745 bw ( KiB/s): 
min=91064, max=95296, per=100.00%, avg=93586.67, stdev=2230.15, samples=3 00:12:01.745 iops : min=22766, max=23824, avg=23396.67, stdev=557.54, samples=3 00:12:01.745 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:12:01.745 lat (msec) : 2=0.14%, 4=99.21%, 10=0.58%, 20=0.03% 00:12:01.745 cpu : usr=99.45%, sys=0.00%, ctx=3, majf=0, minf=605 00:12:01.745 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:01.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:01.745 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:01.745 issued rwts: total=46649,46333,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:01.745 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:01.745 00:12:01.745 Run status group 0 (all jobs): 00:12:01.745 READ: bw=91.1MiB/s (95.5MB/s), 91.1MiB/s-91.1MiB/s (95.5MB/s-95.5MB/s), io=182MiB (191MB), run=2001-2001msec 00:12:01.745 WRITE: bw=90.4MiB/s (94.8MB/s), 90.4MiB/s-90.4MiB/s (94.8MB/s-94.8MB/s), io=181MiB (190MB), run=2001-2001msec 00:12:01.745 ----------------------------------------------------- 00:12:01.745 Suppressions used: 00:12:01.745 count bytes template 00:12:01.745 1 32 /usr/src/fio/parse.c 00:12:01.745 1 8 libtcmalloc_minimal.so 00:12:01.745 ----------------------------------------------------- 00:12:01.745 00:12:01.745 15:07:39 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:01.745 15:07:39 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:01.745 15:07:39 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:12:01.745 15:07:39 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:01.745 15:07:39 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:12:01.745 15:07:39 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:01.745 15:07:39 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:01.745 15:07:39 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:01.745 15:07:39 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:01.745 15:07:39 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:12:01.745 15:07:39 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:01.745 15:07:39 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:12:01.745 15:07:39 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:01.745 15:07:39 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:12:01.745 15:07:39 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:12:01.745 15:07:39 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:12:01.745 15:07:39 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:01.745 15:07:39 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:12:01.745 15:07:39 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 
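For each controller in the loop above, nvme.sh checks the spdk_nvme_identify output for 'Extended Data LBA' to pick a block size and then runs fio through the SPDK external ioengine; because this is an ASan build, libasan is preloaded ahead of the plugin. A condensed sketch of the invocation the surrounding trace expands step by step (paths and the dotted traddr form are taken from the trace):

  FIO=/usr/src/fio/fio
  PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
  CONFIG=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio

  # Resolve the sanitizer runtime the plugin was linked against and preload both.
  asan_lib=$(ldd "$PLUGIN" | grep libasan | awk '{print $3}')
  LD_PRELOAD="$asan_lib $PLUGIN" \
      "$FIO" "$CONFIG" '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096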
00:12:01.745 15:07:39 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:01.745 15:07:39 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:01.745 15:07:39 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:12:01.745 15:07:39 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:01.745 15:07:39 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:01.745 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:01.745 fio-3.35 00:12:01.745 Starting 1 thread 00:12:09.862 00:12:09.862 test: (groupid=0, jobs=1): err= 0: pid=71565: Mon Jul 15 15:07:46 2024 00:12:09.862 read: IOPS=23.4k, BW=91.5MiB/s (96.0MB/s)(183MiB/2001msec) 00:12:09.862 slat (nsec): min=4644, max=82541, avg=5420.23, stdev=1142.37 00:12:09.862 clat (usec): min=220, max=11813, avg=2722.42, stdev=284.51 00:12:09.862 lat (usec): min=225, max=11891, avg=2727.84, stdev=285.02 00:12:09.862 clat percentiles (usec): 00:12:09.862 | 1.00th=[ 2507], 5.00th=[ 2573], 10.00th=[ 2606], 20.00th=[ 2638], 00:12:09.862 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2704], 00:12:09.862 | 70.00th=[ 2737], 80.00th=[ 2769], 90.00th=[ 2802], 95.00th=[ 2900], 00:12:09.862 | 99.00th=[ 3458], 99.50th=[ 3949], 99.90th=[ 6325], 99.95th=[ 8455], 00:12:09.862 | 99.99th=[11469] 00:12:09.862 bw ( KiB/s): min=87984, max=95768, per=99.12%, avg=92885.33, stdev=4266.61, samples=3 00:12:09.862 iops : min=21996, max=23942, avg=23221.33, stdev=1066.65, samples=3 00:12:09.862 write: IOPS=23.3k, BW=90.9MiB/s (95.3MB/s)(182MiB/2001msec); 0 zone resets 00:12:09.862 slat (nsec): min=4747, max=89012, avg=5553.81, stdev=1062.39 00:12:09.862 clat (usec): min=252, max=11567, avg=2727.29, stdev=291.98 00:12:09.862 lat (usec): min=257, max=11596, avg=2732.84, stdev=292.43 00:12:09.862 clat percentiles (usec): 00:12:09.862 | 1.00th=[ 2540], 5.00th=[ 2573], 10.00th=[ 2606], 20.00th=[ 2638], 00:12:09.862 | 30.00th=[ 2671], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2704], 00:12:09.862 | 70.00th=[ 2737], 80.00th=[ 2769], 90.00th=[ 2802], 95.00th=[ 2900], 00:12:09.862 | 99.00th=[ 3458], 99.50th=[ 4424], 99.90th=[ 6456], 99.95th=[ 8848], 00:12:09.862 | 99.99th=[11076] 00:12:09.862 bw ( KiB/s): min=87480, max=96632, per=99.88%, avg=93008.00, stdev=4864.02, samples=3 00:12:09.862 iops : min=21870, max=24158, avg=23252.00, stdev=1216.00, samples=3 00:12:09.862 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:12:09.862 lat (msec) : 2=0.05%, 4=99.38%, 10=0.50%, 20=0.03% 00:12:09.862 cpu : usr=99.25%, sys=0.15%, ctx=30, majf=0, minf=605 00:12:09.862 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:09.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:09.862 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:09.862 issued rwts: total=46876,46581,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:09.862 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:09.862 00:12:09.862 Run status group 0 (all jobs): 00:12:09.862 READ: bw=91.5MiB/s (96.0MB/s), 91.5MiB/s-91.5MiB/s (96.0MB/s-96.0MB/s), io=183MiB (192MB), run=2001-2001msec 00:12:09.862 WRITE: bw=90.9MiB/s (95.3MB/s), 90.9MiB/s-90.9MiB/s 
(95.3MB/s-95.3MB/s), io=182MiB (191MB), run=2001-2001msec 00:12:09.862 ----------------------------------------------------- 00:12:09.862 Suppressions used: 00:12:09.862 count bytes template 00:12:09.862 1 32 /usr/src/fio/parse.c 00:12:09.862 1 8 libtcmalloc_minimal.so 00:12:09.862 ----------------------------------------------------- 00:12:09.862 00:12:09.862 15:07:47 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:09.862 15:07:47 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:09.862 15:07:47 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:12:09.862 15:07:47 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:09.862 15:07:47 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:12:09.862 15:07:47 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:09.862 15:07:47 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:09.863 15:07:47 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:12:09.863 15:07:47 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:12:09.863 15:07:47 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:12:09.863 15:07:47 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:09.863 15:07:47 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:12:09.863 15:07:47 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:09.863 15:07:47 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:12:09.863 15:07:47 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:12:09.863 15:07:47 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:12:09.863 15:07:47 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:09.863 15:07:47 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:12:09.863 15:07:47 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:12:09.863 15:07:47 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:09.863 15:07:47 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:09.863 15:07:47 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:12:09.863 15:07:47 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:09.863 15:07:47 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:12:09.863 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:09.863 fio-3.35 00:12:09.863 Starting 1 thread 00:12:16.452 00:12:16.452 test: (groupid=0, jobs=1): err= 0: pid=71630: Mon Jul 15 15:07:54 2024 00:12:16.452 read: IOPS=23.0k, BW=89.8MiB/s (94.1MB/s)(180MiB/2001msec) 
00:12:16.452 slat (usec): min=4, max=227, avg= 5.53, stdev= 1.82 00:12:16.452 clat (usec): min=258, max=11319, avg=2776.43, stdev=455.48 00:12:16.452 lat (usec): min=263, max=11431, avg=2781.96, stdev=456.40 00:12:16.452 clat percentiles (usec): 00:12:16.452 | 1.00th=[ 2540], 5.00th=[ 2573], 10.00th=[ 2606], 20.00th=[ 2638], 00:12:16.452 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2704], 60.00th=[ 2737], 00:12:16.452 | 70.00th=[ 2769], 80.00th=[ 2802], 90.00th=[ 2868], 95.00th=[ 3032], 00:12:16.452 | 99.00th=[ 4752], 99.50th=[ 6521], 99.90th=[ 7898], 99.95th=[ 8094], 00:12:16.452 | 99.99th=[10945] 00:12:16.452 bw ( KiB/s): min=86744, max=93464, per=98.11%, avg=90165.33, stdev=3361.68, samples=3 00:12:16.452 iops : min=21686, max=23366, avg=22541.33, stdev=840.42, samples=3 00:12:16.452 write: IOPS=22.8k, BW=89.2MiB/s (93.6MB/s)(179MiB/2001msec); 0 zone resets 00:12:16.452 slat (nsec): min=4755, max=43179, avg=5654.79, stdev=1310.53 00:12:16.452 clat (usec): min=220, max=11089, avg=2778.28, stdev=440.73 00:12:16.452 lat (usec): min=226, max=11121, avg=2783.93, stdev=441.58 00:12:16.452 clat percentiles (usec): 00:12:16.452 | 1.00th=[ 2540], 5.00th=[ 2573], 10.00th=[ 2606], 20.00th=[ 2638], 00:12:16.452 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2704], 60.00th=[ 2737], 00:12:16.452 | 70.00th=[ 2769], 80.00th=[ 2802], 90.00th=[ 2868], 95.00th=[ 3032], 00:12:16.452 | 99.00th=[ 4686], 99.50th=[ 6128], 99.90th=[ 7898], 99.95th=[ 8455], 00:12:16.452 | 99.99th=[10552] 00:12:16.452 bw ( KiB/s): min=86328, max=92928, per=98.94%, avg=90389.33, stdev=3553.71, samples=3 00:12:16.452 iops : min=21582, max=23232, avg=22597.33, stdev=888.43, samples=3 00:12:16.452 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:12:16.452 lat (msec) : 2=0.16%, 4=98.17%, 10=1.60%, 20=0.02% 00:12:16.452 cpu : usr=99.25%, sys=0.20%, ctx=2, majf=0, minf=605 00:12:16.452 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:16.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:16.452 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:16.452 issued rwts: total=45976,45703,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:16.452 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:16.452 00:12:16.452 Run status group 0 (all jobs): 00:12:16.452 READ: bw=89.8MiB/s (94.1MB/s), 89.8MiB/s-89.8MiB/s (94.1MB/s-94.1MB/s), io=180MiB (188MB), run=2001-2001msec 00:12:16.452 WRITE: bw=89.2MiB/s (93.6MB/s), 89.2MiB/s-89.2MiB/s (93.6MB/s-93.6MB/s), io=179MiB (187MB), run=2001-2001msec 00:12:16.709 ----------------------------------------------------- 00:12:16.709 Suppressions used: 00:12:16.709 count bytes template 00:12:16.709 1 32 /usr/src/fio/parse.c 00:12:16.709 1 8 libtcmalloc_minimal.so 00:12:16.709 ----------------------------------------------------- 00:12:16.709 00:12:16.709 15:07:54 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:16.709 15:07:54 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:16.709 15:07:54 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:12:16.709 15:07:54 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:16.967 15:07:54 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:12:16.967 15:07:54 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:17.225 15:07:55 nvme.nvme_fio 
-- nvme/nvme.sh@41 -- # bs=4096 00:12:17.225 15:07:55 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:12:17.225 15:07:55 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:12:17.225 15:07:55 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:12:17.225 15:07:55 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:17.225 15:07:55 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:12:17.225 15:07:55 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:17.225 15:07:55 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:12:17.225 15:07:55 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:12:17.225 15:07:55 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:12:17.225 15:07:55 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:17.225 15:07:55 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:12:17.225 15:07:55 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:12:17.225 15:07:55 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:17.225 15:07:55 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:17.225 15:07:55 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:12:17.225 15:07:55 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:17.225 15:07:55 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:12:17.482 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:17.482 fio-3.35 00:12:17.482 Starting 1 thread 00:12:29.692 00:12:29.692 test: (groupid=0, jobs=1): err= 0: pid=71725: Mon Jul 15 15:08:05 2024 00:12:29.692 read: IOPS=22.2k, BW=86.6MiB/s (90.8MB/s)(173MiB/2001msec) 00:12:29.692 slat (nsec): min=4652, max=73610, avg=5719.43, stdev=1569.54 00:12:29.692 clat (usec): min=213, max=11666, avg=2879.31, stdev=620.40 00:12:29.693 lat (usec): min=218, max=11671, avg=2885.03, stdev=621.42 00:12:29.693 clat percentiles (usec): 00:12:29.693 | 1.00th=[ 2540], 5.00th=[ 2606], 10.00th=[ 2638], 20.00th=[ 2671], 00:12:29.693 | 30.00th=[ 2704], 40.00th=[ 2704], 50.00th=[ 2737], 60.00th=[ 2769], 00:12:29.693 | 70.00th=[ 2802], 80.00th=[ 2835], 90.00th=[ 3326], 95.00th=[ 3654], 00:12:29.693 | 99.00th=[ 6194], 99.50th=[ 7635], 99.90th=[ 9634], 99.95th=[10552], 00:12:29.693 | 99.99th=[11469] 00:12:29.693 bw ( KiB/s): min=83392, max=94000, per=100.00%, avg=88944.00, stdev=5321.37, samples=3 00:12:29.693 iops : min=20848, max=23502, avg=22236.00, stdev=1331.20, samples=3 00:12:29.693 write: IOPS=22.0k, BW=86.0MiB/s (90.2MB/s)(172MiB/2001msec); 0 zone resets 00:12:29.693 slat (nsec): min=4734, max=62658, avg=5850.62, stdev=1624.98 00:12:29.693 clat (usec): min=249, max=11722, avg=2882.38, stdev=624.12 
00:12:29.693 lat (usec): min=255, max=11728, avg=2888.23, stdev=625.19 00:12:29.693 clat percentiles (usec): 00:12:29.693 | 1.00th=[ 2573], 5.00th=[ 2606], 10.00th=[ 2638], 20.00th=[ 2671], 00:12:29.693 | 30.00th=[ 2704], 40.00th=[ 2737], 50.00th=[ 2737], 60.00th=[ 2769], 00:12:29.693 | 70.00th=[ 2802], 80.00th=[ 2835], 90.00th=[ 3294], 95.00th=[ 3654], 00:12:29.693 | 99.00th=[ 6194], 99.50th=[ 7570], 99.90th=[ 9765], 99.95th=[10421], 00:12:29.693 | 99.99th=[11469] 00:12:29.693 bw ( KiB/s): min=83384, max=93544, per=100.00%, avg=89120.00, stdev=5205.52, samples=3 00:12:29.693 iops : min=20846, max=23386, avg=22280.00, stdev=1301.38, samples=3 00:12:29.693 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:12:29.693 lat (msec) : 2=0.24%, 4=97.24%, 10=2.40%, 20=0.08% 00:12:29.693 cpu : usr=99.30%, sys=0.05%, ctx=2, majf=0, minf=604 00:12:29.693 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:29.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:29.693 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:29.693 issued rwts: total=44373,44062,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:29.693 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:29.693 00:12:29.693 Run status group 0 (all jobs): 00:12:29.693 READ: bw=86.6MiB/s (90.8MB/s), 86.6MiB/s-86.6MiB/s (90.8MB/s-90.8MB/s), io=173MiB (182MB), run=2001-2001msec 00:12:29.693 WRITE: bw=86.0MiB/s (90.2MB/s), 86.0MiB/s-86.0MiB/s (90.2MB/s-90.2MB/s), io=172MiB (180MB), run=2001-2001msec 00:12:29.693 ----------------------------------------------------- 00:12:29.693 Suppressions used: 00:12:29.693 count bytes template 00:12:29.693 1 32 /usr/src/fio/parse.c 00:12:29.693 1 8 libtcmalloc_minimal.so 00:12:29.693 ----------------------------------------------------- 00:12:29.693 00:12:29.693 15:08:05 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:29.693 15:08:05 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:12:29.693 00:12:29.693 real 0m33.767s 00:12:29.693 user 0m17.207s 00:12:29.693 sys 0m31.779s 00:12:29.693 15:08:05 nvme.nvme_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:29.693 15:08:05 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:12:29.693 ************************************ 00:12:29.693 END TEST nvme_fio 00:12:29.693 ************************************ 00:12:29.693 15:08:06 nvme -- common/autotest_common.sh@1142 -- # return 0 00:12:29.693 00:12:29.693 real 1m47.845s 00:12:29.693 user 3m50.679s 00:12:29.693 sys 0m42.625s 00:12:29.693 15:08:06 nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:29.693 15:08:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:29.693 ************************************ 00:12:29.693 END TEST nvme 00:12:29.693 ************************************ 00:12:29.693 15:08:06 -- common/autotest_common.sh@1142 -- # return 0 00:12:29.693 15:08:06 -- spdk/autotest.sh@217 -- # [[ 0 -eq 1 ]] 00:12:29.693 15:08:06 -- spdk/autotest.sh@221 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:12:29.693 15:08:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:29.693 15:08:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:29.693 15:08:06 -- common/autotest_common.sh@10 -- # set +x 00:12:29.693 ************************************ 00:12:29.693 START TEST nvme_scc 00:12:29.693 ************************************ 00:12:29.693 15:08:06 nvme_scc -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:12:29.693 * Looking for test storage... 00:12:29.693 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:29.693 15:08:06 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:29.693 15:08:06 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:29.693 15:08:06 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:12:29.693 15:08:06 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:12:29.693 15:08:06 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:29.693 15:08:06 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:29.693 15:08:06 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.693 15:08:06 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.693 15:08:06 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.693 15:08:06 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.693 15:08:06 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.693 15:08:06 nvme_scc -- paths/export.sh@5 -- # export PATH 00:12:29.693 15:08:06 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.693 15:08:06 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:12:29.693 15:08:06 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:12:29.693 15:08:06 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:12:29.693 15:08:06 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:12:29.693 15:08:06 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:12:29.693 15:08:06 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:12:29.693 15:08:06 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:12:29.693 15:08:06 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:12:29.693 
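functions.sh, sourced above, populates associative arrays describing every controller; the trace that follows expands scan_nvme_ctrls field by field as it parses nvme-cli id-ctrl output. A simplified sketch of that parsing pattern (the real nvme_get helper also handles namespaces and multi-word values via eval, which this omits):

  declare -A nvme0
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}       # field name: vid, ssvid, sn, mn, fr, mdts, ...
      [[ -n $reg && -n $val ]] || continue
      nvme0[$reg]=${val# }           # keep the value, minus the separator space
  done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
  echo "${nvme0[vid]} ${nvme0[sn]}"  # 0x1b36 and "12341" for this controller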
15:08:06 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:12:29.693 15:08:06 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:29.693 15:08:06 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:12:29.693 15:08:06 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:12:29.693 15:08:06 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:12:29.693 15:08:06 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:29.693 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:29.693 Waiting for block devices as requested 00:12:29.693 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:29.693 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:29.693 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:29.693 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:34.981 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:34.981 15:08:12 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:12:34.981 15:08:12 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:12:34.981 15:08:12 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:34.981 15:08:12 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:12:34.981 15:08:12 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:12:34.981 15:08:12 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:12:34.981 15:08:12 nvme_scc -- scripts/common.sh@15 -- # local i 00:12:34.981 15:08:12 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:12:34.981 15:08:12 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:34.981 15:08:12 nvme_scc -- scripts/common.sh@24 -- # return 0 00:12:34.981 15:08:12 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:12:34.981 15:08:12 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:12:34.981 15:08:12 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:12:34.981 15:08:12 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:34.981 15:08:12 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:12:34.981 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.981 15:08:12 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:12:34.981 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.981 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:34.981 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.981 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.981 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:34.981 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:12:34.981 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:12:34.981 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.981 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.981 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:34.981 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:12:34.981 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:12:34.981 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.981 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 
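What follows for each controller is nvme_get walking the output of nvme id-ctrl line by line: each line is split on ':' into a register name and a value, and the pair is eval'd into that controller's associative array, which is why the trace repeats the same IFS=: / read / eval pattern for every field. A stripped-down sketch of that pattern, assuming nvme-cli is installed and /dev/nvme0 exists, and ignoring the shifting and namespace handling the real functions.sh adds:

declare -A ctrl
while IFS=: read -r reg val; do
    [[ -n $reg && -n $val ]] || continue   # skip headers and blank lines
    reg=${reg//[[:space:]]/}               # field names come padded with spaces
    ctrl[$reg]=${val# }                    # keep the raw value, e.g. 0x1b36
done < <(nvme id-ctrl /dev/nvme0)
echo "vid=${ctrl[vid]} sn=${ctrl[sn]} mdts=${ctrl[mdts]}"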
00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:12:34.982 15:08:12 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:12:34.982 15:08:12 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.982 
15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:12:34.982 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:12:34.983 15:08:12 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.983 15:08:12 nvme_scc 
-- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:12:34.983 15:08:12 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:12:34.983 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
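A quick decode of a few of the values just captured, using plain bash arithmetic rather than anything from the test scripts: sqes/cqes pack the minimum and maximum queue-entry sizes as powers of two, and ONCS bit 8 (per the NVMe base spec) advertises the Copy command, which is what makes this QEMU controller a candidate for the simple-copy checks nvme_scc performs:

sqes=0x66 cqes=0x44 oncs=0x15d
echo "SQ entry: $((1 << (sqes & 0xf))) bytes, CQ entry: $((1 << (cqes & 0xf))) bytes"   # 64 / 16
(( oncs & (1 << 8) )) && echo "Copy command supported"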
00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:12:34.984 15:08:12 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:34.984 15:08:12 
nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:12:34.984 15:08:12 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:12:34.984 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[npdg]="0"' 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:12:34.985 15:08:12 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
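One more side calculation, again plain bash and not taken from the test: this namespace reported flbas=0x4, so the in-use format is lbaf4 above with lbads:12, meaning 2^12 = 4096-byte blocks, and nsze=0x140000 blocks puts the namespace at 5 GiB:

nsze=0x140000 lbads=12
echo "$(( nsze * (1 << lbads) )) bytes ($(( (nsze * (1 << lbads)) >> 30 )) GiB)"   # 5368709120 bytes (5 GiB)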
00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.985 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:12:34.986 15:08:12 nvme_scc -- scripts/common.sh@15 -- # local i 00:12:34.986 15:08:12 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:12:34.986 15:08:12 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:34.986 15:08:12 nvme_scc -- scripts/common.sh@24 -- # return 0 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.986 15:08:12 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.986 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.987 
15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 
-- # IFS=: 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.987 15:08:12 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.987 
15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:12:34.987 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.988 15:08:12 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:12:34.988 15:08:12 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.988 15:08:12 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:12:34.988 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W 
operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.989 15:08:12 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:12:34.989 15:08:12 nvme_scc 
-- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.989 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1n1[nsattr]="0"' 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.990 
15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:12:34.990 15:08:12 nvme_scc -- scripts/common.sh@15 -- # local i 00:12:34.990 15:08:12 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:12:34.990 15:08:12 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:34.990 15:08:12 nvme_scc -- scripts/common.sh@24 -- # return 0 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl 
/dev/nvme2 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.990 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:12:34.991 15:08:12 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.991 
15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[frmw]="0x3"' 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:12:34.991 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.992 15:08:12 nvme_scc 
-- nvme/functions.sh@21 -- # read -r reg val 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 
00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.992 15:08:12 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.992 15:08:12 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:34.992 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
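The register-by-register pattern repeated above comes from the nvme_get helper in the test suite's nvme/functions.sh: every line of nvme id-ctrl / id-ns output is split on the first ':' and stored in a global associative array named after the device. Below is a minimal sketch of that loop, reconstructed from the functions.sh@16-23 trace lines above rather than copied from the actual script (details may differ), assuming the nvme-cli binary path shown in the trace:

nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                              # e.g. nvme2=(), as at functions.sh@20

    # Split each "register : value" line and store it as ${ref}[reg]=val,
    # matching the eval lines in the trace (nvme2[ver]=0x10400, nvme2[oacs]=0x12a, ...).
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue                    # skip header lines with no value
        reg=${reg//[[:space:]]/}                     # "ps 0" -> "ps0"
        val="${val#"${val%%[![:space:]]*}"}"         # trim leading blanks
        eval "${ref}[$reg]=\"$val\""
    done < <(/usr/local/src/nvme-cli/nvme "$@")      # id-ctrl /dev/nvme2, id-ns /dev/nvme2n1, ...
}

After a call such as nvme_get nvme2 id-ctrl /dev/nvme2, ${nvme2[ver]} would expand to 0x10400 and ${nvme2[oacs]} to 0x12a, matching the values captured in the dump above.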
00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.993 15:08:12 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:12:34.993 15:08:12 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.993 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.994 
15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:12:34.994 15:08:12 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.994 15:08:12 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 
-- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.994 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.995 15:08:12 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme2n2[nmic]="0"' 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
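The namespace identify data parsed above reports nsze=0x100000 with flbas=0x4, which selects lbaf4 (ms:0 lbads:12, marked "(in use)"), i.e. 4096-byte logical blocks. A quick back-of-envelope check of the resulting namespace size (variable names here are illustrative, not taken from the script):

nsze=0x100000                                   # blocks, from id-ns above
lbads=12                                        # lbaf4: 2^12 = 4096-byte LBAs
echo $((nsze * (1 << lbads)))                   # 4294967296 bytes
echo "$(((nsze * (1 << lbads)) >> 30)) GiB"     # 4 GiB per namespace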
00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.995 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:34.996 15:08:12 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.996 15:08:12 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 
00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[dpc]=0x1f 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.996 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.997 
15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:12:34.997 15:08:12 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:12:34.997 15:08:12 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:34.997 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:34.998 15:08:12 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:12:34.998 15:08:12 nvme_scc -- scripts/common.sh@15 -- # local i 00:12:34.998 15:08:12 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:12:34.998 15:08:12 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:34.998 15:08:12 nvme_scc -- scripts/common.sh@24 -- # return 0 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:34.998 
15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.998 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 
00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:12:34.999 
15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:12:34.999 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 
00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:35.000 15:08:12 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # 
[[ -n 0 ]] 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:12:35.000 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:12:35.001 15:08:12 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@194 -- # [[ function == function ]] 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme1 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@197 -- # echo nvme1 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme0 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@197 -- # echo nvme0 00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 
00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3
00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme3 oncs
00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme3
00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme3
00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs
00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs
00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d
00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 ))
00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@197 -- # echo nvme3
00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}"
00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2
00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs
00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme2
00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme2
00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs
00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs
00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d
00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 ))
00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@197 -- # echo nvme2
00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@205 -- # (( 4 > 0 ))
00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@206 -- # echo nvme1
00:12:35.001 15:08:12 nvme_scc -- nvme/functions.sh@207 -- # return 0
00:12:35.001 15:08:12 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:12:35.001 15:08:12 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:12:35.001 15:08:12 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:12:35.259 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:12:35.823 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:12:35.823 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:12:35.823 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:12:35.823 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:12:36.081 15:08:14 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:12:36.081 15:08:14 nvme_scc -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:12:36.081 15:08:14 nvme_scc -- common/autotest_common.sh@1105 -- # xtrace_disable
00:12:36.081 15:08:14 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:12:36.081 ************************************
00:12:36.081 START TEST nvme_simple_copy ************************************
00:12:36.081 15:08:14 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:12:36.339 Initializing NVMe Controllers 00:12:36.339 Attaching to 0000:00:10.0 00:12:36.339 Controller supports SCC. Attached to 0000:00:10.0 00:12:36.339 Namespace ID: 1 size: 6GB 00:12:36.339 Initialization complete. 00:12:36.339 00:12:36.339 Controller QEMU NVMe Ctrl (12340 ) 00:12:36.339 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:12:36.339 Namespace Block Size:4096 00:12:36.339 Writing LBAs 0 to 63 with Random Data 00:12:36.339 Copied LBAs from 0 - 63 to the Destination LBA 256 00:12:36.339 LBAs matching Written Data: 64 00:12:36.339 00:12:36.339 real 0m0.285s 00:12:36.339 user 0m0.111s 00:12:36.339 sys 0m0.074s 00:12:36.339 15:08:14 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:36.339 15:08:14 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:12:36.339 ************************************ 00:12:36.339 END TEST nvme_simple_copy 00:12:36.339 ************************************ 00:12:36.339 15:08:14 nvme_scc -- common/autotest_common.sh@1142 -- # return 0 00:12:36.339 00:12:36.339 real 0m8.318s 00:12:36.339 user 0m1.295s 00:12:36.339 sys 0m2.069s 00:12:36.339 15:08:14 nvme_scc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:36.339 15:08:14 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:12:36.339 ************************************ 00:12:36.339 END TEST nvme_scc 00:12:36.339 ************************************ 00:12:36.339 15:08:14 -- common/autotest_common.sh@1142 -- # return 0 00:12:36.339 15:08:14 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:12:36.596 15:08:14 -- spdk/autotest.sh@226 -- # [[ 0 -eq 1 ]] 00:12:36.596 15:08:14 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:12:36.596 15:08:14 -- spdk/autotest.sh@232 -- # [[ 1 -eq 1 ]] 00:12:36.596 15:08:14 -- spdk/autotest.sh@233 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:12:36.596 15:08:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:36.596 15:08:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:36.596 15:08:14 -- common/autotest_common.sh@10 -- # set +x 00:12:36.596 ************************************ 00:12:36.596 START TEST nvme_fdp 00:12:36.596 ************************************ 00:12:36.596 15:08:14 nvme_fdp -- common/autotest_common.sh@1123 -- # test/nvme/nvme_fdp.sh 00:12:36.596 * Looking for test storage... 
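The nvme_simple_copy run above attaches to the SCC-capable controller at 0000:00:10.0, writes LBAs 0-63 with random data, issues a copy to destination LBA 256, and reports "LBAs matching Written Data: 64", i.e. the copied range is identical to the source; with that, nvme_scc finishes and autotest moves on to run_test nvme_fdp. A rough way to double-check such a copy by hand, assuming the namespace were still exposed to the kernel as /dev/nvme1n1 with the 4096-byte block size reported above (it is not once setup.sh has rebound the device, so this is purely illustrative):

    # hypothetical post-copy verification, not part of the SPDK test
    dd if=/dev/nvme1n1 bs=4096 skip=0   count=64 of=/tmp/src.bin status=none
    dd if=/dev/nvme1n1 bs=4096 skip=256 count=64 of=/tmp/dst.bin status=none
    cmp /tmp/src.bin /tmp/dst.bin && echo "LBAs 0-63 match LBAs 256-319"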
00:12:36.596 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:36.596 15:08:14 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:36.596 15:08:14 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:36.596 15:08:14 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:12:36.596 15:08:14 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:12:36.596 15:08:14 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:36.596 15:08:14 nvme_fdp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.596 15:08:14 nvme_fdp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.596 15:08:14 nvme_fdp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.596 15:08:14 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.596 15:08:14 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.596 15:08:14 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.596 15:08:14 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:12:36.596 15:08:14 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.596 15:08:14 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:12:36.596 15:08:14 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:12:36.596 15:08:14 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:12:36.596 15:08:14 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:12:36.596 15:08:14 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:12:36.597 15:08:14 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:12:36.597 15:08:14 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:12:36.597 15:08:14 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:12:36.597 15:08:14 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:12:36.597 15:08:14 nvme_fdp -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:36.597 15:08:14 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:37.160 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:37.417 Waiting for block devices as requested 00:12:37.417 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:37.417 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:37.676 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:37.676 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:42.957 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:42.957 15:08:20 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:12:42.957 15:08:20 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:12:42.957 15:08:20 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:42.957 15:08:20 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:12:42.957 15:08:20 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:12:42.957 15:08:20 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:12:42.957 15:08:20 nvme_fdp -- scripts/common.sh@15 -- # local i 00:12:42.957 15:08:20 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:12:42.957 15:08:20 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:42.957 15:08:20 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:12:42.957 15:08:20 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:12:42.957 15:08:20 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:12:42.957 15:08:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:12:42.957 15:08:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:42.957 15:08:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:12:42.957 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.957 15:08:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:12:42.957 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.957 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:42.957 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.957 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.957 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:42.957 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:12:42.957 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:12:42.957 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.957 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.957 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:42.957 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:12:42.957 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:12:42.957 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.957 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.957 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:12:42.957 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:12:42.957 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:12:42.957 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.957 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.957 
15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:42.957 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:12:42.957 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:12:42.957 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.957 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.957 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:42.957 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:12:42.957 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:12:42.957 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.957 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[rtd3e]="0"' 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:12:42.958 15:08:20 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.958 15:08:20 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:12:42.958 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:12:42.959 15:08:20 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.959 15:08:20 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:12:42.959 15:08:20 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.959 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.960 15:08:20 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n - ]] 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:42.960 
15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.960 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:12:42.961 
15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.961 15:08:20 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:12:42.961 15:08:20 
nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 
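The register dump running through this stretch is scan_nvme_ctrls caching identify data: for each /sys/class/nvme/nvme* device, nvme_get runs /usr/local/src/nvme-cli/nvme id-ctrl (and id-ns for each namespace), splits every "field : value" line on ':', and evals the result into a global associative array (nvme0, nvme0n1, and so on) so that later helpers such as get_nvme_ctrl_feature can answer feature queries without touching the drive again. A stripped-down sketch of that caching idea, with hypothetical names and no claim to match the real helper line for line:

    # illustrative only - the real logic lives in test/common/nvme/functions.sh
    declare -A ctrl_regs
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}                       # drop padding around the field name
        [[ -n $reg && -n $val ]] && ctrl_regs[$reg]=$(echo $val)   # unquoted echo trims stray spaces
    done < <(nvme id-ctrl /dev/nvme0)
    echo "vid=${ctrl_regs[vid]} oncs=${ctrl_regs[oncs]} mdts=${ctrl_regs[mdts]}"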
00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:12:42.961 15:08:20 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:12:42.962 15:08:20 nvme_fdp -- scripts/common.sh@15 -- # local i 00:12:42.962 15:08:20 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:12:42.962 15:08:20 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:42.962 15:08:20 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:12:42.962 15:08:20 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.962 
15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:12:42.962 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:12:42.963 15:08:20 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[elpe]=0 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:12:42.963 15:08:20 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.963 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 
00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.964 15:08:20 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:12:42.964 15:08:20 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 
00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.965 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.966 
15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 
00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:12:42.966 15:08:20 nvme_fdp -- scripts/common.sh@15 -- # local i 00:12:42.966 15:08:20 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:12:42.966 15:08:20 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:42.966 15:08:20 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.966 15:08:20 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.966 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[cntlid]="0"' 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:12:42.967 15:08:20 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.967 15:08:20 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2[hmmin]=0 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:12:42.967 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.968 15:08:20 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:12:42.968 15:08:20 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:12:42.968 15:08:20 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.968 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.969 
15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n1[nuse]="0x100000"' 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:12:42.969 15:08:20 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:12:42.969 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n1[anagrpid]=0 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 
lbads:9 rp:0 ]] 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:12:42.970 15:08:20 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:12:42.970 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.971 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.971 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:42.971 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:12:42.971 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:12:42.971 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.971 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.971 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:42.971 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:12:42.971 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:12:42.971 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.971 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.971 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:42.971 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:12:42.971 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:12:42.971 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.971 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.971 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:42.971 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:12:42.971 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:12:42.971 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.971 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.971 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:42.971 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:12:42.971 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:12:42.971 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.971 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.971 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.971 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:12:42.971 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:12:42.971 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.971 15:08:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.971 15:08:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.971 15:08:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 
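The trace in this section is nvme/functions.sh enumerating /sys/class/nvme/nvme*, running `nvme id-ctrl` / `nvme id-ns` on each device, and caching every reported field into a bash associative array named after the device (nvme2, nvme2n1, nvme2n2, ...). The sketch below is a simplified, illustrative reconstruction of that parsing pattern only: the function name `parse_id_output`, the fixed `regs` array, and the sample echo are stand-ins and not the script's actual interface (the real nvme_get eval's values into an array named by its first argument, as the `eval 'nvme2[vid]="0x1b36"'` lines above show).

```bash
#!/usr/bin/env bash
# Sketch of the pattern visible in this trace (assumed names, not the real API):
# split each "field : value" line of `nvme id-ctrl` on ':' and keep non-empty values.

declare -A regs   # stand-in for the per-device arrays (nvme2, nvme2n1, ...) in functions.sh

parse_id_output() {
    local dev=$1 reg val
    # The CI script invokes /usr/local/src/nvme-cli/nvme id-ctrl "$dev" here.
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue            # skip lines with no value, as the trace's [[ -n ... ]] checks do
        reg=${reg//[[:space:]]/}             # strip the padding around the field name
        regs[$reg]=${val# }                  # drop the single space after ':'; trailing padding is kept
    done < <(nvme id-ctrl "$dev" 2>/dev/null)
}

parse_id_output /dev/nvme2
echo "vid=${regs[vid]} sn=${regs[sn]} subnqn=${regs[subnqn]}"
```

Keeping the fields in an associative array is what lets later FDP test steps look up controller and namespace attributes (oncs, flbas, lbaf*, subnqn, ...) without re-running nvme-cli for each check.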
00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.971 15:08:21 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.971 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 
' 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.972 15:08:21 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.972 15:08:21 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.972 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:12:42.973 15:08:21 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@60 -- # 
ctrls["$ctrl_dev"]=nvme2 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:12:42.973 15:08:21 nvme_fdp -- scripts/common.sh@15 -- # local i 00:12:42.973 15:08:21 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:12:42.973 15:08:21 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:42.973 15:08:21 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.973 15:08:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:12:43.235 15:08:21 
nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:12:43.235 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.236 15:08:21 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.236 15:08:21 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 
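At this point the same loop is filling nvme3 from `nvme id-ctrl /dev/nvme3`: the entries above record a QEMU-emulated controller (vid 0x1b36, sn "12343", mn "QEMU NVMe Ctrl", fr "8.0.0") along with the field this test cares about, nvme3[ctratt]=0x88010. The same registers could be listed directly with nvme-cli (hypothetical one-liner, using the binary path shown in the trace):

  /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 | grep -wE '^(vid|ssvid|sn|mn|fr|ctratt)'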
00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.236 15:08:21 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.236 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 
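Each value captured this way is later read back through a bash nameref rather than by re-querying the device; the ctrl_has_fdp scan near the end of this section uses exactly that lookup. A compact sketch of the pattern (get_reg is a made-up name; the script's own helper is get_nvme_ctrl_feature, functions.sh@69-@76):

  get_reg() {
    local ctrl=$1 reg=$2
    [[ -n $ctrl ]] || return 1
    local -n _ctrl=$ctrl               # nameref onto the per-controller array, e.g. nvme3
    [[ -n ${_ctrl[$reg]} ]] || return 1
    echo "${_ctrl[$reg]}"
  }
  # get_reg nvme3 ctratt   -> 0x88010 (captured above at functions.sh@23)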
00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.237 15:08:21 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 
-- # nvme3[icsvscc]=0 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:12:43.237 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.238 
15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@61 -- # 
nvmes["$ctrl_dev"]=nvme3_ns 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:12:43.238 15:08:21 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@194 -- # [[ function == function ]] 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:12:43.238 15:08:21 nvme_fdp -- 
nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x88010 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@197 -- # echo nvme3 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@206 -- # echo nvme3 00:12:43.238 15:08:21 nvme_fdp -- nvme/functions.sh@207 -- # return 0 00:12:43.238 15:08:21 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:12:43.238 15:08:21 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:12:43.238 15:08:21 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:43.806 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:44.373 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:44.373 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:44.373 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:44.632 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:44.632 15:08:22 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:12:44.632 15:08:22 nvme_fdp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:12:44.632 15:08:22 nvme_fdp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:44.632 15:08:22 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:12:44.632 ************************************ 00:12:44.632 START TEST nvme_flexible_data_placement 00:12:44.632 ************************************ 00:12:44.632 15:08:22 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:12:44.921 Initializing NVMe Controllers 00:12:44.921 Attaching to 0000:00:13.0 00:12:44.921 Controller supports FDP Attached to 0000:00:13.0 00:12:44.921 Namespace ID: 1 Endurance Group ID: 1 
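
The controller selection just traced comes down to a single bit: get_ctrls_with_feature walks every controller parsed by the IFS=:/read/eval loop, fetches its CTRATT value, and tests bit 19 (the FDP attribute). nvme0, nvme1 and nvme2 all report ctratt=0x8000, so the check fails for them; nvme3 reports 0x88010 with bit 19 set, so it alone is echoed and becomes the test target at 0000:00:13.0. A minimal standalone sketch of the same check -- an illustration only, which assumes nvme-cli is installed and that its id-ctrl output carries a 'ctratt : 0x...' line; the device paths are examples:

ctrl_has_fdp() {
  local dev=$1 ctratt
  # pull the CTRATT field out of 'nvme id-ctrl' (output format assumed)
  ctratt=$(nvme id-ctrl "$dev" | awk '/^ctratt/ {print $3}')
  (( ctratt & 1 << 19 ))    # bit 19 == Flexible Data Placement supported
}

ctrl_has_fdp /dev/nvme3 && echo "FDP supported"      # 0x88010 -> bit 19 set
ctrl_has_fdp /dev/nvme0 || echo "FDP not supported"  # 0x8000  -> bit 19 clear
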
00:12:44.921 Initialization complete. 00:12:44.921 00:12:44.921 ================================== 00:12:44.921 == FDP tests for Namespace: #01 == 00:12:44.921 ================================== 00:12:44.921 00:12:44.921 Get Feature: FDP: 00:12:44.921 ================= 00:12:44.921 Enabled: Yes 00:12:44.921 FDP configuration Index: 0 00:12:44.921 00:12:44.921 FDP configurations log page 00:12:44.921 =========================== 00:12:44.921 Number of FDP configurations: 1 00:12:44.921 Version: 0 00:12:44.921 Size: 112 00:12:44.921 FDP Configuration Descriptor: 0 00:12:44.921 Descriptor Size: 96 00:12:44.921 Reclaim Group Identifier format: 2 00:12:44.921 FDP Volatile Write Cache: Not Present 00:12:44.921 FDP Configuration: Valid 00:12:44.921 Vendor Specific Size: 0 00:12:44.921 Number of Reclaim Groups: 2 00:12:44.921 Number of Reclaim Unit Handles: 8 00:12:44.921 Max Placement Identifiers: 128 00:12:44.921 Number of Namespaces Supported: 256 00:12:44.921 Reclaim unit Nominal Size: 6000000 bytes 00:12:44.921 Estimated Reclaim Unit Time Limit: Not Reported 00:12:44.921 RUH Desc #000: RUH Type: Initially Isolated 00:12:44.921 RUH Desc #001: RUH Type: Initially Isolated 00:12:44.921 RUH Desc #002: RUH Type: Initially Isolated 00:12:44.921 RUH Desc #003: RUH Type: Initially Isolated 00:12:44.921 RUH Desc #004: RUH Type: Initially Isolated 00:12:44.921 RUH Desc #005: RUH Type: Initially Isolated 00:12:44.921 RUH Desc #006: RUH Type: Initially Isolated 00:12:44.921 RUH Desc #007: RUH Type: Initially Isolated 00:12:44.921 00:12:44.921 FDP reclaim unit handle usage log page 00:12:44.921 ====================================== 00:12:44.921 Number of Reclaim Unit Handles: 8 00:12:44.921 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:12:44.921 RUH Usage Desc #001: RUH Attributes: Unused 00:12:44.921 RUH Usage Desc #002: RUH Attributes: Unused 00:12:44.921 RUH Usage Desc #003: RUH Attributes: Unused 00:12:44.921 RUH Usage Desc #004: RUH Attributes: Unused 00:12:44.921 RUH Usage Desc #005: RUH Attributes: Unused 00:12:44.921 RUH Usage Desc #006: RUH Attributes: Unused 00:12:44.921 RUH Usage Desc #007: RUH Attributes: Unused 00:12:44.921 00:12:44.921 FDP statistics log page 00:12:44.921 ======================= 00:12:44.921 Host bytes with metadata written: 970428416 00:12:44.921 Media bytes with metadata written: 970543104 00:12:44.921 Media bytes erased: 0 00:12:44.921 00:12:44.921 FDP Reclaim unit handle status 00:12:44.921 ============================== 00:12:44.921 Number of RUHS descriptors: 2 00:12:44.921 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000002287 00:12:44.921 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:12:44.921 00:12:44.921 FDP write on placement id: 0 success 00:12:44.921 00:12:44.921 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:12:44.921 00:12:44.921 IO mgmt send: RUH update for Placement ID: #0 Success 00:12:44.921 00:12:44.921 Get Feature: FDP Events for Placement handle: #0 00:12:44.921 ======================== 00:12:44.921 Number of FDP Events: 6 00:12:44.921 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:12:44.921 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:12:44.921 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:12:44.921 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:12:44.921 FDP Event: #4 Type: Media Reallocated Enabled: No 00:12:44.921 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 
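
For reference, the RUAMW fields in the reclaim unit handle status above are reported in hex; converting them shows how many writes each reclaim unit still accepts (the unit is logical blocks per the NVMe FDP definition of RUAMW -- stated as an assumption here, since the tool does not print it):

for ruamw in 0x0000000000002287 0x0000000000006000; do
  printf '%s = %d\n' "$ruamw" "$ruamw"   # bash printf accepts 0x-prefixed hex
done
# 0x...2287 = 8839, 0x...6000 = 24576
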
00:12:44.921 00:12:44.921 FDP events log page 00:12:44.921 =================== 00:12:44.921 Number of FDP events: 1 00:12:44.921 FDP Event #0: 00:12:44.921 Event Type: RU Not Written to Capacity 00:12:44.921 Placement Identifier: Valid 00:12:44.921 NSID: Valid 00:12:44.921 Location: Valid 00:12:44.921 Placement Identifier: 0 00:12:44.921 Event Timestamp: 8 00:12:44.921 Namespace Identifier: 1 00:12:44.921 Reclaim Group Identifier: 0 00:12:44.921 Reclaim Unit Handle Identifier: 0 00:12:44.921 00:12:44.921 FDP test passed 00:12:44.921 00:12:44.921 real 0m0.269s 00:12:44.921 ************************************ 00:12:44.921 END TEST nvme_flexible_data_placement 00:12:44.921 ************************************ 00:12:44.921 user 0m0.080s 00:12:44.921 sys 0m0.087s 00:12:44.921 15:08:22 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:44.921 15:08:22 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:12:44.921 15:08:22 nvme_fdp -- common/autotest_common.sh@1142 -- # return 0 00:12:44.921 ************************************ 00:12:44.921 END TEST nvme_fdp 00:12:44.921 ************************************ 00:12:44.921 00:12:44.921 real 0m8.509s 00:12:44.921 user 0m1.305s 00:12:44.921 sys 0m2.237s 00:12:44.921 15:08:22 nvme_fdp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:44.921 15:08:22 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:12:44.921 15:08:23 -- common/autotest_common.sh@1142 -- # return 0 00:12:44.921 15:08:23 -- spdk/autotest.sh@236 -- # [[ '' -eq 1 ]] 00:12:44.921 15:08:23 -- spdk/autotest.sh@240 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:12:44.921 15:08:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:44.921 15:08:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:44.921 15:08:23 -- common/autotest_common.sh@10 -- # set +x 00:12:45.181 ************************************ 00:12:45.181 START TEST nvme_rpc 00:12:45.181 ************************************ 00:12:45.181 15:08:23 nvme_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:12:45.181 * Looking for test storage... 
00:12:45.181 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:45.181 15:08:23 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:45.181 15:08:23 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:12:45.181 15:08:23 nvme_rpc -- common/autotest_common.sh@1524 -- # bdfs=() 00:12:45.181 15:08:23 nvme_rpc -- common/autotest_common.sh@1524 -- # local bdfs 00:12:45.181 15:08:23 nvme_rpc -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:12:45.181 15:08:23 nvme_rpc -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:12:45.181 15:08:23 nvme_rpc -- common/autotest_common.sh@1513 -- # bdfs=() 00:12:45.181 15:08:23 nvme_rpc -- common/autotest_common.sh@1513 -- # local bdfs 00:12:45.181 15:08:23 nvme_rpc -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:45.181 15:08:23 nvme_rpc -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:45.181 15:08:23 nvme_rpc -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:12:45.181 15:08:23 nvme_rpc -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:12:45.181 15:08:23 nvme_rpc -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:45.181 15:08:23 nvme_rpc -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:12:45.181 15:08:23 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:12:45.181 15:08:23 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=73145 00:12:45.181 15:08:23 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:12:45.181 15:08:23 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:12:45.181 15:08:23 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 73145 00:12:45.181 15:08:23 nvme_rpc -- common/autotest_common.sh@829 -- # '[' -z 73145 ']' 00:12:45.181 15:08:23 nvme_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.181 15:08:23 nvme_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:45.181 15:08:23 nvme_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.181 15:08:23 nvme_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:45.181 15:08:23 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.441 [2024-07-15 15:08:23.374670] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
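
The opening lines of this test show how get_first_nvme_bdf picks its target: gen_nvme.sh emits a JSON controller config, jq extracts every traddr, and the first entry (0000:00:10.0 on this VM) becomes $bdf. The same pipeline, runnable on its own against this checkout:

rootdir=/home/vagrant/spdk_repo/spdk           # repo path used by this job
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found' >&2; exit 1; }
echo "${bdfs[0]}"                              # -> 0000:00:10.0 in this run
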
00:12:45.441 [2024-07-15 15:08:23.374944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73145 ] 00:12:45.441 [2024-07-15 15:08:23.540226] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:45.701 [2024-07-15 15:08:23.802604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.701 [2024-07-15 15:08:23.802639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.640 15:08:24 nvme_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:46.640 15:08:24 nvme_rpc -- common/autotest_common.sh@862 -- # return 0 00:12:46.640 15:08:24 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:12:46.899 Nvme0n1 00:12:47.158 15:08:25 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:12:47.158 15:08:25 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:12:47.158 request: 00:12:47.158 { 00:12:47.158 "bdev_name": "Nvme0n1", 00:12:47.158 "filename": "non_existing_file", 00:12:47.158 "method": "bdev_nvme_apply_firmware", 00:12:47.158 "req_id": 1 00:12:47.158 } 00:12:47.158 Got JSON-RPC error response 00:12:47.158 response: 00:12:47.158 { 00:12:47.158 "code": -32603, 00:12:47.158 "message": "open file failed." 00:12:47.158 } 00:12:47.158 15:08:25 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:12:47.158 15:08:25 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:12:47.158 15:08:25 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:12:47.418 15:08:25 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:47.418 15:08:25 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 73145 00:12:47.418 15:08:25 nvme_rpc -- common/autotest_common.sh@948 -- # '[' -z 73145 ']' 00:12:47.418 15:08:25 nvme_rpc -- common/autotest_common.sh@952 -- # kill -0 73145 00:12:47.418 15:08:25 nvme_rpc -- common/autotest_common.sh@953 -- # uname 00:12:47.418 15:08:25 nvme_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:47.418 15:08:25 nvme_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73145 00:12:47.418 killing process with pid 73145 00:12:47.418 15:08:25 nvme_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:47.418 15:08:25 nvme_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:47.418 15:08:25 nvme_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73145' 00:12:47.418 15:08:25 nvme_rpc -- common/autotest_common.sh@967 -- # kill 73145 00:12:47.418 15:08:25 nvme_rpc -- common/autotest_common.sh@972 -- # wait 73145 00:12:49.950 ************************************ 00:12:49.950 END TEST nvme_rpc 00:12:49.950 ************************************ 00:12:49.950 00:12:49.950 real 0m4.925s 00:12:49.950 user 0m8.901s 00:12:49.950 sys 0m0.661s 00:12:49.950 15:08:27 nvme_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:49.950 15:08:27 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.950 15:08:28 -- common/autotest_common.sh@1142 -- # return 0 00:12:49.950 15:08:28 -- spdk/autotest.sh@241 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 
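
The body of nvme_rpc above is a negative firmware-update test: attach the first controller as Nvme0, ask bdev_nvme_apply_firmware to flash a file that does not exist, require the JSON-RPC call to fail (code -32603, 'open file failed.'), then detach. A condensed sketch of the same sequence, assuming a spdk_tgt is already listening on the default /var/tmp/spdk.sock:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0   # -> Nvme0n1
if ! $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
  echo 'got the expected "open file failed." error'
fi
$rpc bdev_nvme_detach_controller Nvme0
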
00:12:49.950 15:08:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:49.950 15:08:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:49.950 15:08:28 -- common/autotest_common.sh@10 -- # set +x 00:12:49.950 ************************************ 00:12:49.950 START TEST nvme_rpc_timeouts 00:12:49.950 ************************************ 00:12:49.950 15:08:28 nvme_rpc_timeouts -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:12:50.209 * Looking for test storage... 00:12:50.209 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:50.209 15:08:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:50.209 15:08:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_73227 00:12:50.209 15:08:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_73227 00:12:50.209 15:08:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=73255 00:12:50.209 15:08:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:12:50.209 15:08:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:12:50.209 15:08:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 73255 00:12:50.209 15:08:28 nvme_rpc_timeouts -- common/autotest_common.sh@829 -- # '[' -z 73255 ']' 00:12:50.209 15:08:28 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.209 15:08:28 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:50.209 15:08:28 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.209 15:08:28 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:50.209 15:08:28 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:12:50.209 [2024-07-15 15:08:28.246548] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
00:12:50.209 [2024-07-15 15:08:28.246762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73255 ] 00:12:50.468 [2024-07-15 15:08:28.410196] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:50.727 [2024-07-15 15:08:28.653982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.727 [2024-07-15 15:08:28.654055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.666 15:08:29 nvme_rpc_timeouts -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:51.666 15:08:29 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # return 0 00:12:51.666 15:08:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:12:51.666 Checking default timeout settings: 00:12:51.666 15:08:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:51.925 15:08:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:12:51.925 Making settings changes with rpc: 00:12:51.925 15:08:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:12:52.184 Check default vs. modified settings: 00:12:52.184 15:08:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:12:52.184 15:08:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:52.443 15:08:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:12:52.443 15:08:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:52.443 15:08:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_73227 00:12:52.443 15:08:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:52.443 15:08:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:52.443 15:08:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:12:52.443 15:08:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:52.443 15:08:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_73227 00:12:52.443 15:08:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:52.443 Setting action_on_timeout is changed as expected. 00:12:52.443 15:08:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:12:52.443 15:08:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:12:52.443 15:08:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
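
The timeout checks that follow use a snapshot/modify/snapshot pattern: save_config is captured before and after bdev_nvme_set_options, and each setting is grepped out of both snapshots and compared. xtrace does not show redirections, so writing the snapshots to the two /tmp files is inferred from the tmpfile names defined earlier; the commands themselves are the ones in the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc save_config > /tmp/settings_default_73227           # defaults: none / 0 / 0
$rpc bdev_nvme_set_options --timeout-us=12000000 \
     --timeout-admin-us=24000000 --action-on-timeout=abort
$rpc save_config > /tmp/settings_modified_73227
for setting in action_on_timeout timeout_us timeout_admin_us; do
  before=$(grep "$setting" /tmp/settings_default_73227  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
  after=$(grep "$setting" /tmp/settings_modified_73227 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
  echo "$setting: $before -> $after"   # none->abort, 0->12000000, 0->24000000
done
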
00:12:52.443 15:08:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:52.443 15:08:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_73227 00:12:52.443 15:08:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:52.443 15:08:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:52.443 15:08:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:12:52.443 15:08:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:52.443 15:08:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_73227 00:12:52.443 15:08:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:52.443 Setting timeout_us is changed as expected. 00:12:52.443 15:08:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:12:52.443 15:08:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:12:52.443 15:08:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:12:52.443 15:08:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:52.443 15:08:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:52.443 15:08:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_73227 00:12:52.443 15:08:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:52.443 15:08:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:12:52.443 15:08:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:52.443 15:08:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_73227 00:12:52.443 15:08:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:52.443 Setting timeout_admin_us is changed as expected. 00:12:52.443 15:08:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:12:52.443 15:08:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:12:52.443 15:08:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
00:12:52.443 15:08:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:12:52.443 15:08:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_73227 /tmp/settings_modified_73227 00:12:52.443 15:08:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 73255 00:12:52.443 15:08:30 nvme_rpc_timeouts -- common/autotest_common.sh@948 -- # '[' -z 73255 ']' 00:12:52.443 15:08:30 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # kill -0 73255 00:12:52.443 15:08:30 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # uname 00:12:52.443 15:08:30 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:52.443 15:08:30 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73255 00:12:52.701 killing process with pid 73255 00:12:52.702 15:08:30 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:52.702 15:08:30 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:52.702 15:08:30 nvme_rpc_timeouts -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73255' 00:12:52.702 15:08:30 nvme_rpc_timeouts -- common/autotest_common.sh@967 -- # kill 73255 00:12:52.702 15:08:30 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # wait 73255 00:12:55.235 RPC TIMEOUT SETTING TEST PASSED. 00:12:55.235 15:08:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:12:55.235 ************************************ 00:12:55.235 END TEST nvme_rpc_timeouts 00:12:55.235 ************************************ 00:12:55.235 00:12:55.235 real 0m5.139s 00:12:55.235 user 0m9.540s 00:12:55.235 sys 0m0.656s 00:12:55.235 15:08:33 nvme_rpc_timeouts -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:55.235 15:08:33 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:12:55.235 15:08:33 -- common/autotest_common.sh@1142 -- # return 0 00:12:55.235 15:08:33 -- spdk/autotest.sh@243 -- # uname -s 00:12:55.235 15:08:33 -- spdk/autotest.sh@243 -- # '[' Linux = Linux ']' 00:12:55.235 15:08:33 -- spdk/autotest.sh@244 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:12:55.235 15:08:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:55.235 15:08:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:55.235 15:08:33 -- common/autotest_common.sh@10 -- # set +x 00:12:55.235 ************************************ 00:12:55.235 START TEST sw_hotplug 00:12:55.235 ************************************ 00:12:55.235 15:08:33 sw_hotplug -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:12:55.235 * Looking for test storage... 
00:12:55.493 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:55.493 15:08:33 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:55.749 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:56.008 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:56.008 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:56.008 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:56.008 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:56.008 15:08:34 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:12:56.008 15:08:34 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:12:56.008 15:08:34 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:12:56.008 15:08:34 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:12:56.008 15:08:34 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:12:56.008 15:08:34 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:12:56.008 15:08:34 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:12:56.008 15:08:34 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:12:56.008 15:08:34 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:12:56.008 15:08:34 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:12:56.008 15:08:34 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:12:56.008 15:08:34 sw_hotplug -- scripts/common.sh@230 -- # local class 00:12:56.008 15:08:34 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@15 -- # local i 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:12:56.266 15:08:34 sw_hotplug -- 
scripts/common.sh@15 -- # local i 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:12.0 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@15 -- # local i 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:12.0 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:13.0 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@15 -- # local i 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:13.0 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@325 -- # (( 4 )) 00:12:56.266 15:08:34 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:56.266 15:08:34 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:12:56.266 15:08:34 sw_hotplug -- 
nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:12:56.266 15:08:34 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:56.831 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:56.831 Waiting for block devices as requested 00:12:57.088 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:57.088 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:57.088 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:57.347 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:02.637 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:02.637 15:08:40 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:13:02.637 15:08:40 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:02.896 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:13:02.896 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:02.896 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:13:03.163 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:13:03.435 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:03.435 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:03.694 15:08:41 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:13:03.694 15:08:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:03.694 15:08:41 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:13:03.694 15:08:41 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:13:03.694 15:08:41 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=74127 00:13:03.694 15:08:41 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:13:03.694 15:08:41 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:13:03.694 15:08:41 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:13:03.694 15:08:41 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:13:03.694 15:08:41 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:13:03.694 15:08:41 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:13:03.694 15:08:41 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:13:03.694 15:08:41 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:13:03.694 15:08:41 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 false 00:13:03.694 15:08:41 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:13:03.694 15:08:41 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:13:03.694 15:08:41 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:13:03.694 15:08:41 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:13:03.694 15:08:41 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:03.952 Initializing NVMe Controllers 00:13:03.952 Attaching to 0000:00:10.0 00:13:03.952 Attaching to 0000:00:11.0 00:13:03.952 Attached to 0000:00:10.0 00:13:03.952 Attached to 0000:00:11.0 00:13:03.952 Initialization complete. Starting I/O... 
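
The device list that sw_hotplug.sh cycles through comes from nvme_in_userspace, traced just above: lspci output is narrowed to PCI class 01 (mass storage), subclass 08 (non-volatile memory controller), programming interface 02 (NVM Express), and each matching BDF that passes pci_can_use is kept. Reassembled as a single pipeline from the scripts/common.sh lines in the trace:

lspci -mm -n -D \
  | grep -i -- -p02 \
  | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' \
  | tr -d '"'
# On this VM: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0.
# sw_hotplug.sh then keeps only the first nvme_count=2 entries, and with
# PCI_ALLOWED='0000:00:10.0 0000:00:11.0' those two are the hotplug targets.
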
00:13:03.952 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:13:03.952 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:13:03.952 00:13:04.889 QEMU NVMe Ctrl (12340 ): 1848 I/Os completed (+1848) 00:13:04.889 QEMU NVMe Ctrl (12341 ): 1853 I/Os completed (+1853) 00:13:04.889 00:13:06.265 QEMU NVMe Ctrl (12340 ): 4328 I/Os completed (+2480) 00:13:06.265 QEMU NVMe Ctrl (12341 ): 4346 I/Os completed (+2493) 00:13:06.265 00:13:07.201 QEMU NVMe Ctrl (12340 ): 6876 I/Os completed (+2548) 00:13:07.201 QEMU NVMe Ctrl (12341 ): 6905 I/Os completed (+2559) 00:13:07.201 00:13:08.137 QEMU NVMe Ctrl (12340 ): 9372 I/Os completed (+2496) 00:13:08.137 QEMU NVMe Ctrl (12341 ): 9423 I/Os completed (+2518) 00:13:08.137 00:13:09.074 QEMU NVMe Ctrl (12340 ): 11912 I/Os completed (+2540) 00:13:09.074 QEMU NVMe Ctrl (12341 ): 11968 I/Os completed (+2545) 00:13:09.074 00:13:10.011 15:08:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:10.011 15:08:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:10.011 15:08:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:10.011 [2024-07-15 15:08:47.758924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:13:10.011 Controller removed: QEMU NVMe Ctrl (12340 ) 00:13:10.011 [2024-07-15 15:08:47.761799] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:10.011 [2024-07-15 15:08:47.761957] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:10.011 [2024-07-15 15:08:47.762091] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:10.011 [2024-07-15 15:08:47.762194] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:10.011 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:10.011 [2024-07-15 15:08:47.766124] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:10.011 [2024-07-15 15:08:47.766227] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:10.011 [2024-07-15 15:08:47.766290] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:10.011 [2024-07-15 15:08:47.766349] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:10.011 15:08:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:10.011 15:08:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:10.011 [2024-07-15 15:08:47.793531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:13:10.011 Controller removed: QEMU NVMe Ctrl (12341 ) 00:13:10.011 [2024-07-15 15:08:47.794921] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:10.011 [2024-07-15 15:08:47.795048] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:10.011 [2024-07-15 15:08:47.795110] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:10.011 [2024-07-15 15:08:47.795172] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:10.011 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:10.011 [2024-07-15 15:08:47.797417] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:10.011 [2024-07-15 15:08:47.797489] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:10.012 [2024-07-15 15:08:47.797543] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:10.012 [2024-07-15 15:08:47.797593] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:10.012 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:13:10.012 15:08:47 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:13:10.012 15:08:47 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:10.012 EAL: Scan for (pci) bus failed. 00:13:10.012 15:08:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:10.012 15:08:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:10.012 15:08:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:10.012 00:13:10.012 15:08:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:10.012 15:08:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:10.012 15:08:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:10.012 15:08:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:10.012 15:08:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:10.012 Attaching to 0000:00:10.0 00:13:10.012 Attached to 0000:00:10.0 00:13:10.012 15:08:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:10.012 15:08:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:10.012 15:08:48 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:10.012 Attaching to 0000:00:11.0 00:13:10.012 Attached to 0000:00:11.0 00:13:10.949 QEMU NVMe Ctrl (12340 ): 2476 I/Os completed (+2476) 00:13:10.949 QEMU NVMe Ctrl (12341 ): 2241 I/Os completed (+2241) 00:13:10.949 00:13:11.887 QEMU NVMe Ctrl (12340 ): 5080 I/Os completed (+2604) 00:13:11.887 QEMU NVMe Ctrl (12341 ): 4851 I/Os completed (+2610) 00:13:11.887 00:13:13.265 QEMU NVMe Ctrl (12340 ): 7610 I/Os completed (+2530) 00:13:13.265 QEMU NVMe Ctrl (12341 ): 7397 I/Os completed (+2546) 00:13:13.265 00:13:13.863 QEMU NVMe Ctrl (12340 ): 10082 I/Os completed (+2472) 00:13:13.863 QEMU NVMe Ctrl (12341 ): 9881 I/Os completed (+2484) 00:13:13.863 00:13:15.257 QEMU NVMe Ctrl (12340 ): 12522 I/Os completed (+2440) 00:13:15.257 QEMU NVMe Ctrl (12341 ): 12339 I/Os completed (+2458) 00:13:15.257 00:13:16.191 QEMU NVMe Ctrl (12340 ): 15023 I/Os completed (+2501) 00:13:16.191 QEMU NVMe Ctrl (12341 ): 14865 I/Os completed (+2526) 00:13:16.191 00:13:17.129 QEMU NVMe Ctrl (12340 ): 17479 I/Os completed (+2456) 00:13:17.129 QEMU NVMe Ctrl (12341 ): 17341 I/Os completed (+2476) 
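
Each iteration above surprise-removes the two allowed controllers (the bare 'echo 1' at sw_hotplug.sh@40, followed by the nvme_ctrlr_fail and unregister messages) and then restores and rebinds them before the next round of I/O. xtrace hides the redirection targets, so the removal half below is a guess at the usual sysfs mechanism rather than a verbatim reconstruction; the rescan line is the one the script installs in its EXIT trap further down:

bdf=0000:00:10.0                                     # first allowed device in this run
echo 1 | sudo tee /sys/bus/pci/devices/$bdf/remove   # assumed target of 'echo 1'
echo 1 | sudo tee /sys/bus/pci/rescan                # re-enumerate, as the trap does
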
00:13:17.129 00:13:18.082 QEMU NVMe Ctrl (12340 ): 19867 I/Os completed (+2388) 00:13:18.082 QEMU NVMe Ctrl (12341 ): 19866 I/Os completed (+2525) 00:13:18.082 00:13:19.021 QEMU NVMe Ctrl (12340 ): 22204 I/Os completed (+2337) 00:13:19.021 QEMU NVMe Ctrl (12341 ): 22261 I/Os completed (+2395) 00:13:19.021 00:13:19.958 QEMU NVMe Ctrl (12340 ): 24788 I/Os completed (+2584) 00:13:19.958 QEMU NVMe Ctrl (12341 ): 24845 I/Os completed (+2584) 00:13:19.958 00:13:20.897 QEMU NVMe Ctrl (12340 ): 27236 I/Os completed (+2448) 00:13:20.897 QEMU NVMe Ctrl (12341 ): 27299 I/Os completed (+2454) 00:13:20.897 00:13:21.833 QEMU NVMe Ctrl (12340 ): 29539 I/Os completed (+2303) 00:13:21.833 QEMU NVMe Ctrl (12341 ): 29629 I/Os completed (+2330) 00:13:21.833 00:13:22.092 15:09:00 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:13:22.092 15:09:00 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:22.092 15:09:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:22.092 15:09:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:22.092 [2024-07-15 15:09:00.084883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:13:22.092 Controller removed: QEMU NVMe Ctrl (12340 ) 00:13:22.092 [2024-07-15 15:09:00.086314] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.092 [2024-07-15 15:09:00.086376] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.092 [2024-07-15 15:09:00.086401] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.092 [2024-07-15 15:09:00.086424] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.092 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:22.092 [2024-07-15 15:09:00.088943] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.092 [2024-07-15 15:09:00.088997] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.092 [2024-07-15 15:09:00.089013] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.092 [2024-07-15 15:09:00.089028] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.092 15:09:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:22.092 15:09:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:22.092 [2024-07-15 15:09:00.121138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:13:22.092 Controller removed: QEMU NVMe Ctrl (12341 ) 00:13:22.092 [2024-07-15 15:09:00.122406] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.092 [2024-07-15 15:09:00.122458] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.092 [2024-07-15 15:09:00.122483] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.092 [2024-07-15 15:09:00.122502] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.092 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:22.092 [2024-07-15 15:09:00.124682] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.092 [2024-07-15 15:09:00.124723] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.092 [2024-07-15 15:09:00.124756] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.092 [2024-07-15 15:09:00.124773] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.092 15:09:00 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:13:22.092 15:09:00 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:22.092 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:13:22.092 EAL: Scan for (pci) bus failed. 00:13:22.092 15:09:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:22.092 15:09:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:22.092 15:09:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:22.352 15:09:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:22.352 15:09:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:22.352 15:09:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:22.352 15:09:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:22.352 15:09:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:22.352 Attaching to 0000:00:10.0 00:13:22.352 Attached to 0000:00:10.0 00:13:22.352 15:09:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:22.352 15:09:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:22.352 15:09:00 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:22.352 Attaching to 0000:00:11.0 00:13:22.352 Attached to 0000:00:11.0 00:13:22.920 QEMU NVMe Ctrl (12340 ): 1424 I/Os completed (+1424) 00:13:22.921 QEMU NVMe Ctrl (12341 ): 1221 I/Os completed (+1221) 00:13:22.921 00:13:23.856 QEMU NVMe Ctrl (12340 ): 3848 I/Os completed (+2424) 00:13:23.856 QEMU NVMe Ctrl (12341 ): 3645 I/Os completed (+2424) 00:13:23.856 00:13:25.236 QEMU NVMe Ctrl (12340 ): 6432 I/Os completed (+2584) 00:13:25.236 QEMU NVMe Ctrl (12341 ): 6232 I/Os completed (+2587) 00:13:25.236 00:13:25.840 QEMU NVMe Ctrl (12340 ): 8876 I/Os completed (+2444) 00:13:25.840 QEMU NVMe Ctrl (12341 ): 8705 I/Os completed (+2473) 00:13:25.840 00:13:27.229 QEMU NVMe Ctrl (12340 ): 11424 I/Os completed (+2548) 00:13:27.229 QEMU NVMe Ctrl (12341 ): 11264 I/Os completed (+2559) 00:13:27.229 00:13:28.166 QEMU NVMe Ctrl (12340 ): 14076 I/Os completed (+2652) 00:13:28.166 QEMU NVMe Ctrl (12341 ): 13916 I/Os completed (+2652) 00:13:28.166 00:13:29.102 QEMU NVMe Ctrl (12340 ): 16646 I/Os completed (+2570) 00:13:29.102 QEMU NVMe Ctrl (12341 ): 16485 I/Os completed (+2569) 00:13:29.102 
00:13:30.038 QEMU NVMe Ctrl (12340 ): 19144 I/Os completed (+2498) 00:13:30.038 QEMU NVMe Ctrl (12341 ): 18963 I/Os completed (+2478) 00:13:30.038 00:13:30.975 QEMU NVMe Ctrl (12340 ): 21543 I/Os completed (+2399) 00:13:30.975 QEMU NVMe Ctrl (12341 ): 21572 I/Os completed (+2609) 00:13:30.975 00:13:31.913 QEMU NVMe Ctrl (12340 ): 23988 I/Os completed (+2445) 00:13:31.913 QEMU NVMe Ctrl (12341 ): 24164 I/Os completed (+2592) 00:13:31.913 00:13:32.851 QEMU NVMe Ctrl (12340 ): 26417 I/Os completed (+2429) 00:13:32.851 QEMU NVMe Ctrl (12341 ): 26767 I/Os completed (+2603) 00:13:32.851 00:13:34.227 QEMU NVMe Ctrl (12340 ): 28857 I/Os completed (+2440) 00:13:34.227 QEMU NVMe Ctrl (12341 ): 29208 I/Os completed (+2441) 00:13:34.227 00:13:34.485 15:09:12 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:13:34.485 15:09:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:34.485 15:09:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:34.485 15:09:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:34.485 [2024-07-15 15:09:12.399939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:13:34.485 Controller removed: QEMU NVMe Ctrl (12340 ) 00:13:34.485 [2024-07-15 15:09:12.401562] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:34.485 [2024-07-15 15:09:12.401666] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:34.485 [2024-07-15 15:09:12.401708] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:34.485 [2024-07-15 15:09:12.401752] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:34.485 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:34.485 [2024-07-15 15:09:12.404504] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:34.485 [2024-07-15 15:09:12.404583] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:34.485 [2024-07-15 15:09:12.404630] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:34.485 [2024-07-15 15:09:12.404676] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:34.485 15:09:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:34.485 15:09:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:34.485 [2024-07-15 15:09:12.437302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:13:34.485 Controller removed: QEMU NVMe Ctrl (12341 ) 00:13:34.485 [2024-07-15 15:09:12.438767] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:34.485 [2024-07-15 15:09:12.438866] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:34.485 [2024-07-15 15:09:12.438916] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:34.485 [2024-07-15 15:09:12.438959] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:34.485 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:34.485 [2024-07-15 15:09:12.441551] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:34.485 [2024-07-15 15:09:12.441627] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:34.485 [2024-07-15 15:09:12.441678] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:34.485 [2024-07-15 15:09:12.441719] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:34.485 15:09:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:13:34.485 15:09:12 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:34.485 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:13:34.485 EAL: Scan for (pci) bus failed. 00:13:34.485 15:09:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:34.485 15:09:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:34.485 15:09:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:34.744 15:09:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:34.744 15:09:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:34.744 15:09:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:34.744 15:09:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:34.744 15:09:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:34.744 Attaching to 0000:00:10.0 00:13:34.744 Attached to 0000:00:10.0 00:13:34.744 15:09:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:34.744 15:09:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:34.744 15:09:12 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:34.744 Attaching to 0000:00:11.0 00:13:34.744 Attached to 0000:00:11.0 00:13:34.744 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:34.744 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:34.744 [2024-07-15 15:09:12.725831] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:13:46.964 15:09:24 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:13:46.964 15:09:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:46.964 15:09:24 sw_hotplug -- common/autotest_common.sh@715 -- # time=42.97 00:13:46.964 15:09:24 sw_hotplug -- common/autotest_common.sh@716 -- # echo 42.97 00:13:46.964 15:09:24 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:13:46.964 15:09:24 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.97 00:13:46.964 15:09:24 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.97 2 00:13:46.964 remove_attach_helper took 42.97s to complete (handling 2 nvme drive(s)) 15:09:24 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:13:53.572 15:09:30 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 74127 00:13:53.572 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (74127) - No such process 00:13:53.572 15:09:30 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 74127 00:13:53.572 15:09:30 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:13:53.572 15:09:30 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:13:53.572 15:09:30 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:13:53.572 15:09:30 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=74662 00:13:53.572 15:09:30 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:53.572 15:09:30 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:13:53.572 15:09:30 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 74662 00:13:53.572 15:09:30 sw_hotplug -- common/autotest_common.sh@829 -- # '[' -z 74662 ']' 00:13:53.572 15:09:30 sw_hotplug -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.572 15:09:30 sw_hotplug -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:53.572 15:09:30 sw_hotplug -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.572 15:09:30 sw_hotplug -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:53.572 15:09:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:53.572 [2024-07-15 15:09:30.834276] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
00:13:53.572 [2024-07-15 15:09:30.834507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74662 ] 00:13:53.572 [2024-07-15 15:09:30.996019] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.572 [2024-07-15 15:09:31.263706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.510 15:09:32 sw_hotplug -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:54.510 15:09:32 sw_hotplug -- common/autotest_common.sh@862 -- # return 0 00:13:54.510 15:09:32 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:13:54.510 15:09:32 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.510 15:09:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:54.510 15:09:32 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.510 15:09:32 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:13:54.510 15:09:32 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:13:54.510 15:09:32 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:13:54.510 15:09:32 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:13:54.510 15:09:32 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:13:54.510 15:09:32 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:13:54.510 15:09:32 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:13:54.510 15:09:32 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 true 00:13:54.510 15:09:32 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:13:54.510 15:09:32 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:13:54.510 15:09:32 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:13:54.510 15:09:32 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:13:54.510 15:09:32 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:14:01.084 15:09:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:01.084 15:09:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:01.084 15:09:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:01.084 15:09:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:01.084 15:09:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:01.084 15:09:38 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:01.084 15:09:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:01.084 15:09:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:01.084 15:09:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:01.084 15:09:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:01.084 15:09:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:01.084 15:09:38 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.085 15:09:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:01.085 [2024-07-15 15:09:38.406424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:14:01.085 [2024-07-15 15:09:38.408765] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:01.085 [2024-07-15 15:09:38.408815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:01.085 [2024-07-15 15:09:38.408848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.085 [2024-07-15 15:09:38.408872] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:01.085 [2024-07-15 15:09:38.408887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:01.085 [2024-07-15 15:09:38.408897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.085 [2024-07-15 15:09:38.408910] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:01.085 [2024-07-15 15:09:38.408920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:01.085 [2024-07-15 15:09:38.408931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.085 [2024-07-15 15:09:38.408941] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:01.085 [2024-07-15 15:09:38.408955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:01.085 [2024-07-15 15:09:38.408964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.085 15:09:38 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.085 15:09:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:14:01.085 15:09:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:01.085 [2024-07-15 15:09:38.905486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:14:01.085 [2024-07-15 15:09:38.907659] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:01.085 [2024-07-15 15:09:38.907721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:01.085 [2024-07-15 15:09:38.907736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.085 [2024-07-15 15:09:38.907761] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:01.085 [2024-07-15 15:09:38.907770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:01.085 [2024-07-15 15:09:38.907782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.085 [2024-07-15 15:09:38.907792] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:01.085 [2024-07-15 15:09:38.907802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:01.085 [2024-07-15 15:09:38.907810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.085 [2024-07-15 15:09:38.907821] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:01.085 [2024-07-15 15:09:38.907829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:01.085 [2024-07-15 15:09:38.907838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.085 15:09:38 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:14:01.085 15:09:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:01.085 15:09:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:01.085 15:09:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:01.085 15:09:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:01.085 15:09:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:01.085 15:09:38 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.085 15:09:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:01.085 15:09:38 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.085 15:09:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:01.085 15:09:38 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:01.085 15:09:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:01.085 15:09:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:01.085 15:09:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:01.085 15:09:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:01.085 15:09:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:01.085 15:09:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:01.085 15:09:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:01.085 15:09:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:14:01.352 15:09:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:01.352 15:09:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:01.352 15:09:39 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:13.560 15:09:51 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:13.560 15:09:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:13.560 15:09:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:13.560 15:09:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:13.560 15:09:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:13.560 15:09:51 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.560 15:09:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:13.560 15:09:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:13.560 15:09:51 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.560 15:09:51 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:13.560 15:09:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:13.560 15:09:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:13.560 15:09:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:13.560 15:09:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:13.560 15:09:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:13.560 15:09:51 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:13.560 15:09:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:13.560 15:09:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:13.560 15:09:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:13.560 15:09:51 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.560 15:09:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:13.560 15:09:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:13.560 15:09:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:13.560 [2024-07-15 15:09:51.381608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:14:13.560 [2024-07-15 15:09:51.383971] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:13.560 [2024-07-15 15:09:51.384071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:13.560 [2024-07-15 15:09:51.384130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:13.560 [2024-07-15 15:09:51.384195] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:13.560 [2024-07-15 15:09:51.384211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:13.560 [2024-07-15 15:09:51.384221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:13.560 [2024-07-15 15:09:51.384234] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:13.560 [2024-07-15 15:09:51.384245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:13.560 [2024-07-15 15:09:51.384258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:13.560 [2024-07-15 15:09:51.384268] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:13.560 [2024-07-15 15:09:51.384280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:13.560 [2024-07-15 15:09:51.384290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:13.560 15:09:51 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.560 15:09:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:13.560 15:09:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:13.819 [2024-07-15 15:09:51.880684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:14:13.819 [2024-07-15 15:09:51.883250] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:13.819 [2024-07-15 15:09:51.883364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:13.819 [2024-07-15 15:09:51.883426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:13.819 [2024-07-15 15:09:51.883499] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:13.819 [2024-07-15 15:09:51.883561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:13.819 [2024-07-15 15:09:51.883602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:13.819 [2024-07-15 15:09:51.883668] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:13.819 [2024-07-15 15:09:51.883696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:13.819 [2024-07-15 15:09:51.883743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:13.819 [2024-07-15 15:09:51.883806] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:13.819 [2024-07-15 15:09:51.883838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:13.819 [2024-07-15 15:09:51.883890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:13.819 15:09:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:14:13.819 15:09:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:13.819 15:09:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:13.819 15:09:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:13.819 15:09:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:13.819 15:09:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:14.078 15:09:51 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.078 15:09:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:14.078 15:09:51 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.078 15:09:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:14.078 15:09:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:14.078 15:09:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:14.078 15:09:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:14.078 15:09:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:14.078 15:09:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:14.335 15:09:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:14.335 15:09:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:14.335 15:09:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:14.335 15:09:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:14:14.335 15:09:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:14.335 15:09:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:14.335 15:09:52 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:26.560 15:10:04 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:26.560 15:10:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:26.560 15:10:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:26.560 15:10:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:26.560 15:10:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:26.560 15:10:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:26.560 15:10:04 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.560 15:10:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:26.560 15:10:04 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.560 15:10:04 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:26.560 15:10:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:26.560 15:10:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:26.560 15:10:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:26.560 15:10:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:26.560 15:10:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:26.560 15:10:04 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:26.560 15:10:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:26.560 15:10:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:26.560 15:10:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:26.560 15:10:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:26.560 15:10:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:26.560 15:10:04 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.560 15:10:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:26.560 15:10:04 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.560 [2024-07-15 15:10:04.456660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:14:26.560 [2024-07-15 15:10:04.459207] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:26.560 [2024-07-15 15:10:04.459310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:26.560 [2024-07-15 15:10:04.459379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.560 [2024-07-15 15:10:04.459479] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:26.560 [2024-07-15 15:10:04.459529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:26.560 [2024-07-15 15:10:04.459575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.560 [2024-07-15 15:10:04.459638] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:26.560 [2024-07-15 15:10:04.459675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:26.560 [2024-07-15 15:10:04.459721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.560 [2024-07-15 15:10:04.459771] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:26.560 [2024-07-15 15:10:04.459805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:26.560 [2024-07-15 15:10:04.459852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.560 15:10:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:14:26.560 15:10:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:27.126 15:10:04 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:14:27.126 15:10:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:27.126 15:10:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:27.126 15:10:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:27.126 15:10:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:27.126 15:10:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:27.126 15:10:04 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.126 15:10:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:27.126 15:10:05 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.126 15:10:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:27.126 15:10:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:27.126 [2024-07-15 15:10:05.055536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:14:27.126 [2024-07-15 15:10:05.058045] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:27.126 [2024-07-15 15:10:05.058166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:27.126 [2024-07-15 15:10:05.058227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:27.126 [2024-07-15 15:10:05.058279] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:27.126 [2024-07-15 15:10:05.058321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:27.126 [2024-07-15 15:10:05.058359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:27.126 [2024-07-15 15:10:05.058373] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:27.126 [2024-07-15 15:10:05.058386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:27.126 [2024-07-15 15:10:05.058396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:27.126 [2024-07-15 15:10:05.058412] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:27.126 [2024-07-15 15:10:05.058422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:27.126 [2024-07-15 15:10:05.058435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:27.694 15:10:05 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:14:27.694 15:10:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:27.694 15:10:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:27.694 15:10:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:27.694 15:10:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:27.694 15:10:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:27.694 15:10:05 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.694 15:10:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:27.694 15:10:05 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.694 15:10:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:27.694 15:10:05 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:27.694 15:10:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:27.694 15:10:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:27.694 15:10:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:27.694 15:10:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:27.694 15:10:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:27.694 15:10:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:27.694 15:10:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:27.694 15:10:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:14:27.954 15:10:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:27.954 15:10:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:27.954 15:10:05 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:40.186 15:10:17 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:40.186 15:10:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:40.186 15:10:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:40.186 15:10:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:40.186 15:10:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:40.186 15:10:17 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.186 15:10:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:40.186 15:10:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:40.186 15:10:17 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.186 15:10:17 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:40.186 15:10:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:40.186 15:10:17 sw_hotplug -- common/autotest_common.sh@715 -- # time=45.61 00:14:40.186 15:10:17 sw_hotplug -- common/autotest_common.sh@716 -- # echo 45.61 00:14:40.186 15:10:17 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:14:40.186 15:10:17 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.61 00:14:40.186 15:10:17 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.61 2 00:14:40.186 remove_attach_helper took 45.61s to complete (handling 2 nvme drive(s)) 15:10:17 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:14:40.186 15:10:17 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.186 15:10:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:40.186 15:10:17 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.186 15:10:17 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:14:40.186 15:10:17 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.186 15:10:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:40.186 15:10:17 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.186 15:10:17 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:14:40.186 15:10:17 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:14:40.186 15:10:17 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:14:40.186 15:10:17 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:14:40.186 15:10:17 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:14:40.186 15:10:17 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:14:40.186 15:10:17 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:14:40.186 15:10:17 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 true 00:14:40.186 15:10:17 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:14:40.186 15:10:17 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:14:40.186 15:10:17 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:14:40.186 15:10:17 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:14:40.186 15:10:17 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:14:46.741 15:10:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:46.741 15:10:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:46.741 15:10:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:46.741 15:10:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:46.741 15:10:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:46.741 15:10:24 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:46.741 15:10:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:46.741 15:10:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:46.741 15:10:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:46.741 15:10:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:46.741 15:10:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:46.741 15:10:24 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.741 15:10:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:46.741 [2024-07-15 15:10:24.054309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:14:46.741 [2024-07-15 15:10:24.055963] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:46.741 [2024-07-15 15:10:24.056013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:46.741 [2024-07-15 15:10:24.056032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:46.741 [2024-07-15 15:10:24.056056] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:46.741 [2024-07-15 15:10:24.056068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:46.741 [2024-07-15 15:10:24.056078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:46.741 [2024-07-15 15:10:24.056090] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:46.741 [2024-07-15 15:10:24.056100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:46.741 [2024-07-15 15:10:24.056111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:46.741 [2024-07-15 15:10:24.056121] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:46.741 [2024-07-15 15:10:24.056132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:46.741 [2024-07-15 15:10:24.056142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:46.741 15:10:24 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.742 15:10:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:14:46.742 15:10:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:46.742 [2024-07-15 15:10:24.453579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:14:46.742 [2024-07-15 15:10:24.455773] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:46.742 [2024-07-15 15:10:24.455822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:46.742 [2024-07-15 15:10:24.455836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:46.742 [2024-07-15 15:10:24.455861] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:46.742 [2024-07-15 15:10:24.455870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:46.742 [2024-07-15 15:10:24.455882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:46.742 [2024-07-15 15:10:24.455891] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:46.742 [2024-07-15 15:10:24.455901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:46.742 [2024-07-15 15:10:24.455910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:46.742 [2024-07-15 15:10:24.455921] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:46.742 [2024-07-15 15:10:24.455930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:46.742 [2024-07-15 15:10:24.455939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:46.742 15:10:24 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:14:46.742 15:10:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:46.742 15:10:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:46.742 15:10:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:46.742 15:10:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:46.742 15:10:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:46.742 15:10:24 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.742 15:10:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:46.742 15:10:24 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.742 15:10:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:46.742 15:10:24 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:46.742 15:10:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:46.742 15:10:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:46.742 15:10:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:46.742 15:10:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:46.742 15:10:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:46.742 15:10:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:46.742 15:10:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:46.742 15:10:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:14:47.000 15:10:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:47.000 15:10:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:47.000 15:10:24 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:59.200 15:10:36 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:59.200 15:10:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:59.200 15:10:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:59.200 15:10:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:59.200 15:10:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:59.200 15:10:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:59.200 15:10:36 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.200 15:10:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:59.200 15:10:36 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.200 15:10:36 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:59.200 15:10:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:59.200 15:10:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:59.200 15:10:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:59.200 15:10:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:59.200 15:10:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:59.200 [2024-07-15 15:10:37.029615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:14:59.200 15:10:37 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:59.200 15:10:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:59.200 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:14:59.200 EAL: Scan for (pci) bus failed. 
00:14:59.200 [2024-07-15 15:10:37.031940] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:59.200 [2024-07-15 15:10:37.032069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.200 [2024-07-15 15:10:37.032110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.200 [2024-07-15 15:10:37.032182] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:59.200 [2024-07-15 15:10:37.032205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.200 [2024-07-15 15:10:37.032218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.200 [2024-07-15 15:10:37.032232] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:59.200 [2024-07-15 15:10:37.032248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.200 [2024-07-15 15:10:37.032261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.200 [2024-07-15 15:10:37.032272] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:59.200 [2024-07-15 15:10:37.032289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.200 [2024-07-15 15:10:37.032301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.200 15:10:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:59.200 15:10:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:59.200 15:10:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:59.200 15:10:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:59.200 15:10:37 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.200 15:10:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:59.200 15:10:37 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.200 15:10:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:59.200 15:10:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:59.765 15:10:37 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:14:59.765 15:10:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:59.765 15:10:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:59.765 15:10:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:59.765 15:10:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:59.765 15:10:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:59.765 15:10:37 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.765 15:10:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:59.765 15:10:37 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.765 15:10:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # 
(( 1 > 0 )) 00:14:59.765 15:10:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:59.765 [2024-07-15 15:10:37.728242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:14:59.765 [2024-07-15 15:10:37.730569] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:59.765 [2024-07-15 15:10:37.730624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.765 [2024-07-15 15:10:37.730642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.765 [2024-07-15 15:10:37.730669] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:59.765 [2024-07-15 15:10:37.730681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.765 [2024-07-15 15:10:37.730696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.765 [2024-07-15 15:10:37.730708] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:59.765 [2024-07-15 15:10:37.730721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.765 [2024-07-15 15:10:37.730732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.765 [2024-07-15 15:10:37.730744] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:59.765 [2024-07-15 15:10:37.730754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.765 [2024-07-15 15:10:37.730766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.765 [2024-07-15 15:10:37.730781] bdev_nvme.c:5228:aer_cb: *WARNING*: AER request execute failed 00:14:59.765 [2024-07-15 15:10:37.730795] bdev_nvme.c:5228:aer_cb: *WARNING*: AER request execute failed 00:14:59.765 [2024-07-15 15:10:37.730805] bdev_nvme.c:5228:aer_cb: *WARNING*: AER request execute failed 00:14:59.765 [2024-07-15 15:10:37.730818] bdev_nvme.c:5228:aer_cb: *WARNING*: AER request execute failed 00:15:00.022 15:10:38 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:15:00.022 15:10:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:00.280 15:10:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:00.280 15:10:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:00.280 15:10:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:00.280 15:10:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:00.280 15:10:38 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.280 15:10:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:00.280 15:10:38 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.280 15:10:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:00.280 15:10:38 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:00.280 15:10:38 sw_hotplug -- 
nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:00.280 15:10:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:00.280 15:10:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:00.280 15:10:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:00.280 15:10:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:00.280 15:10:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:00.280 15:10:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:00.280 15:10:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:15:00.538 15:10:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:00.538 15:10:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:00.538 15:10:38 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:12.757 15:10:50 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:12.757 15:10:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:12.757 15:10:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:12.758 15:10:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:12.758 15:10:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:12.758 15:10:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:12.758 15:10:50 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.758 15:10:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:12.758 15:10:50 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.758 15:10:50 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:12.758 15:10:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:12.758 15:10:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:12.758 15:10:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:12.758 15:10:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:12.758 15:10:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:12.758 15:10:50 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:12.758 15:10:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:12.758 15:10:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:12.758 15:10:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:12.758 15:10:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:12.758 15:10:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:12.758 15:10:50 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.758 15:10:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:12.758 [2024-07-15 15:10:50.603923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:15:12.758 [2024-07-15 15:10:50.606184] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:12.758 [2024-07-15 15:10:50.606228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:12.758 [2024-07-15 15:10:50.606255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:12.758 [2024-07-15 15:10:50.606279] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:12.758 [2024-07-15 15:10:50.606295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:12.758 [2024-07-15 15:10:50.606306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:12.758 [2024-07-15 15:10:50.606319] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:12.758 [2024-07-15 15:10:50.606329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:12.758 [2024-07-15 15:10:50.606341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:12.758 [2024-07-15 15:10:50.606352] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:12.758 [2024-07-15 15:10:50.606363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:12.758 [2024-07-15 15:10:50.606373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:12.758 15:10:50 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.758 15:10:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:12.758 15:10:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:13.017 [2024-07-15 15:10:51.003170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:15:13.017 [2024-07-15 15:10:51.004692] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:13.017 [2024-07-15 15:10:51.004739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.017 [2024-07-15 15:10:51.004755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.017 [2024-07-15 15:10:51.004779] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:13.017 [2024-07-15 15:10:51.004790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.017 [2024-07-15 15:10:51.004802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.017 [2024-07-15 15:10:51.004812] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:13.017 [2024-07-15 15:10:51.004827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.017 [2024-07-15 15:10:51.004836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.017 [2024-07-15 15:10:51.004848] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:13.017 [2024-07-15 15:10:51.004857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.017 [2024-07-15 15:10:51.004868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.276 15:10:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:13.276 15:10:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:13.276 15:10:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:13.276 15:10:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:13.276 15:10:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:13.276 15:10:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:13.276 15:10:51 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.276 15:10:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:13.276 15:10:51 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.276 15:10:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:13.276 15:10:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:13.276 15:10:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:13.276 15:10:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:13.276 15:10:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:13.535 15:10:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:13.535 15:10:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:13.535 15:10:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:13.535 15:10:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:13.535 15:10:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:15:13.535 15:10:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:13.535 15:10:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:13.535 15:10:51 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:25.757 15:11:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:25.757 15:11:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:25.757 15:11:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:25.757 15:11:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:25.757 15:11:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:25.757 15:11:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:25.757 15:11:03 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.757 15:11:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:25.757 15:11:03 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.757 15:11:03 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:25.757 15:11:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:25.757 15:11:03 sw_hotplug -- common/autotest_common.sh@715 -- # time=45.61 00:15:25.757 15:11:03 sw_hotplug -- common/autotest_common.sh@716 -- # echo 45.61 00:15:25.757 15:11:03 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:15:25.757 15:11:03 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.61 00:15:25.757 15:11:03 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.61 2 00:15:25.757 remove_attach_helper took 45.61s to complete (handling 2 nvme drive(s)) 15:11:03 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:15:25.757 15:11:03 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 74662 00:15:25.757 15:11:03 sw_hotplug -- common/autotest_common.sh@948 -- # '[' -z 74662 ']' 00:15:25.757 15:11:03 sw_hotplug -- common/autotest_common.sh@952 -- # kill -0 74662 00:15:25.757 15:11:03 sw_hotplug -- common/autotest_common.sh@953 -- # uname 00:15:25.757 15:11:03 sw_hotplug -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:25.757 15:11:03 sw_hotplug -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74662 00:15:25.757 15:11:03 sw_hotplug -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:25.757 15:11:03 sw_hotplug -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:25.757 15:11:03 sw_hotplug -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74662' 00:15:25.757 killing process with pid 74662 00:15:25.757 15:11:03 sw_hotplug -- common/autotest_common.sh@967 -- # kill 74662 00:15:25.757 15:11:03 sw_hotplug -- common/autotest_common.sh@972 -- # wait 74662 00:15:28.291 15:11:06 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:28.860 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:29.428 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:29.428 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:29.428 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:29.428 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:29.428 00:15:29.429 real 2m34.278s 00:15:29.429 user 1m55.036s 00:15:29.429 sys 0m19.192s 00:15:29.429 15:11:07 sw_hotplug -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:15:29.429 15:11:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:29.429 ************************************ 00:15:29.429 END TEST sw_hotplug 00:15:29.429 ************************************ 00:15:29.688 15:11:07 -- common/autotest_common.sh@1142 -- # return 0 00:15:29.688 15:11:07 -- spdk/autotest.sh@247 -- # [[ 1 -eq 1 ]] 00:15:29.688 15:11:07 -- spdk/autotest.sh@248 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:15:29.688 15:11:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:29.688 15:11:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:29.688 15:11:07 -- common/autotest_common.sh@10 -- # set +x 00:15:29.688 ************************************ 00:15:29.688 START TEST nvme_xnvme 00:15:29.688 ************************************ 00:15:29.688 15:11:07 nvme_xnvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:15:29.688 * Looking for test storage... 00:15:29.688 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:15:29.688 15:11:07 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:29.688 15:11:07 nvme_xnvme -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:29.688 15:11:07 nvme_xnvme -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:29.688 15:11:07 nvme_xnvme -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:29.688 15:11:07 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.688 15:11:07 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.688 15:11:07 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.688 15:11:07 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:15:29.688 15:11:07 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.688 15:11:07 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 
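For reference, the bdev_bdfs helper driving the hotplug wait loop traced above (sw_hotplug.sh@12-13 and @50-51) reduces to one RPC call piped through jq. A minimal sketch, assuming scripts/rpc.py is used in place of the test framework's rpc_cmd wrapper and an arbitrary one-second polling interval:

    # List the PCI addresses (BDFs) backing all NVMe bdevs currently known to the target.
    bdev_bdfs() {
        ./scripts/rpc.py bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

    # Poll until the surprise-removed controllers disappear from the bdev list,
    # mirroring the "Still waiting for %s to be gone" loop in the trace.
    while bdfs=($(bdev_bdfs)) && (( ${#bdfs[@]} > 0 )); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 1
    done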
00:15:29.688 15:11:07 nvme_xnvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:29.688 15:11:07 nvme_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:29.688 15:11:07 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:29.688 ************************************ 00:15:29.688 START TEST xnvme_to_malloc_dd_copy 00:15:29.688 ************************************ 00:15:29.688 15:11:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1123 -- # malloc_to_xnvme_copy 00:15:29.688 15:11:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:15:29.688 15:11:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:15:29.688 15:11:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:15:29.688 15:11:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # return 00:15:29.688 15:11:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:15:29.688 15:11:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:15:29.688 15:11:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:15:29.688 15:11:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:15:29.688 15:11:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:15:29.688 15:11:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:15:29.688 15:11:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:15:29.688 15:11:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:15:29.688 15:11:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:15:29.688 15:11:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:15:29.688 15:11:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:15:29.688 15:11:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:15:29.688 15:11:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:15:29.688 15:11:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:15:29.688 15:11:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:15:29.688 15:11:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:15:29.688 15:11:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:15:29.688 15:11:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:15:29.688 15:11:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:15:29.688 15:11:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:15:29.947 { 00:15:29.947 "subsystems": [ 00:15:29.947 { 00:15:29.947 "subsystem": "bdev", 00:15:29.947 "config": [ 00:15:29.947 { 00:15:29.947 "params": { 00:15:29.947 "block_size": 512, 00:15:29.947 "num_blocks": 2097152, 00:15:29.947 "name": "malloc0" 00:15:29.947 }, 00:15:29.947 "method": 
"bdev_malloc_create" 00:15:29.947 }, 00:15:29.947 { 00:15:29.947 "params": { 00:15:29.947 "io_mechanism": "libaio", 00:15:29.947 "filename": "/dev/nullb0", 00:15:29.947 "name": "null0" 00:15:29.947 }, 00:15:29.947 "method": "bdev_xnvme_create" 00:15:29.947 }, 00:15:29.947 { 00:15:29.947 "method": "bdev_wait_for_examine" 00:15:29.947 } 00:15:29.947 ] 00:15:29.947 } 00:15:29.947 ] 00:15:29.947 } 00:15:29.947 [2024-07-15 15:11:07.833252] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:15:29.947 [2024-07-15 15:11:07.833457] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76035 ] 00:15:29.947 [2024-07-15 15:11:07.999847] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.206 [2024-07-15 15:11:08.231236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.287  Copying: 253/1024 [MB] (253 MBps) Copying: 502/1024 [MB] (248 MBps) Copying: 752/1024 [MB] (249 MBps) Copying: 1009/1024 [MB] (257 MBps) Copying: 1024/1024 [MB] (average 252 MBps) 00:15:40.287 00:15:40.287 15:11:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:15:40.287 15:11:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:15:40.287 15:11:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:15:40.287 15:11:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:15:40.287 { 00:15:40.287 "subsystems": [ 00:15:40.287 { 00:15:40.287 "subsystem": "bdev", 00:15:40.287 "config": [ 00:15:40.287 { 00:15:40.287 "params": { 00:15:40.287 "block_size": 512, 00:15:40.287 "num_blocks": 2097152, 00:15:40.287 "name": "malloc0" 00:15:40.287 }, 00:15:40.287 "method": "bdev_malloc_create" 00:15:40.287 }, 00:15:40.287 { 00:15:40.287 "params": { 00:15:40.287 "io_mechanism": "libaio", 00:15:40.287 "filename": "/dev/nullb0", 00:15:40.287 "name": "null0" 00:15:40.287 }, 00:15:40.287 "method": "bdev_xnvme_create" 00:15:40.287 }, 00:15:40.287 { 00:15:40.287 "method": "bdev_wait_for_examine" 00:15:40.287 } 00:15:40.287 ] 00:15:40.287 } 00:15:40.287 ] 00:15:40.287 } 00:15:40.287 [2024-07-15 15:11:18.395521] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
00:15:40.287 [2024-07-15 15:11:18.395666] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76154 ] 00:15:40.545 [2024-07-15 15:11:18.557148] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.804 [2024-07-15 15:11:18.784659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.518  Copying: 265/1024 [MB] (265 MBps) Copying: 527/1024 [MB] (262 MBps) Copying: 790/1024 [MB] (262 MBps) Copying: 1024/1024 [MB] (average 263 MBps) 00:15:51.518 00:15:51.518 15:11:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:15:51.518 15:11:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:15:51.518 15:11:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:15:51.518 15:11:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:15:51.518 15:11:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:15:51.518 15:11:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:15:51.518 { 00:15:51.518 "subsystems": [ 00:15:51.518 { 00:15:51.518 "subsystem": "bdev", 00:15:51.518 "config": [ 00:15:51.518 { 00:15:51.518 "params": { 00:15:51.518 "block_size": 512, 00:15:51.518 "num_blocks": 2097152, 00:15:51.518 "name": "malloc0" 00:15:51.518 }, 00:15:51.518 "method": "bdev_malloc_create" 00:15:51.518 }, 00:15:51.518 { 00:15:51.518 "params": { 00:15:51.518 "io_mechanism": "io_uring", 00:15:51.518 "filename": "/dev/nullb0", 00:15:51.518 "name": "null0" 00:15:51.518 }, 00:15:51.518 "method": "bdev_xnvme_create" 00:15:51.518 }, 00:15:51.518 { 00:15:51.518 "method": "bdev_wait_for_examine" 00:15:51.518 } 00:15:51.518 ] 00:15:51.518 } 00:15:51.518 ] 00:15:51.518 } 00:15:51.518 [2024-07-15 15:11:28.686913] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
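Both copy directions above are driven by spdk_dd with a bdev config generated on the fly and handed over on /dev/fd/62. A standalone sketch of the same libaio run, writing the config to a temporary file instead (block size, block count, and bdev names taken from the trace; the temp-file path is arbitrary):

    # 1 GiB null_blk device for the xnvme bdev to sit on.
    modprobe null_blk gb=1
    cat > /tmp/xnvme_dd.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" },
              "method": "bdev_malloc_create" },
            { "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
              "method": "bdev_xnvme_create" },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF
    # malloc0 -> null0 (xnvme.sh@42), then null0 -> malloc0 (xnvme.sh@47).
    ./build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /tmp/xnvme_dd.json
    ./build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /tmp/xnvme_dd.json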
00:15:51.518 [2024-07-15 15:11:28.687113] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76271 ] 00:15:51.518 [2024-07-15 15:11:28.847531] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.518 [2024-07-15 15:11:29.084873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.867  Copying: 267/1024 [MB] (267 MBps) Copying: 540/1024 [MB] (272 MBps) Copying: 798/1024 [MB] (257 MBps) Copying: 1024/1024 [MB] (average 263 MBps) 00:16:01.867 00:16:01.867 15:11:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:16:01.867 15:11:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:16:01.867 15:11:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:16:01.867 15:11:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:16:01.867 { 00:16:01.867 "subsystems": [ 00:16:01.867 { 00:16:01.867 "subsystem": "bdev", 00:16:01.867 "config": [ 00:16:01.867 { 00:16:01.867 "params": { 00:16:01.867 "block_size": 512, 00:16:01.867 "num_blocks": 2097152, 00:16:01.867 "name": "malloc0" 00:16:01.867 }, 00:16:01.867 "method": "bdev_malloc_create" 00:16:01.867 }, 00:16:01.867 { 00:16:01.867 "params": { 00:16:01.867 "io_mechanism": "io_uring", 00:16:01.867 "filename": "/dev/nullb0", 00:16:01.867 "name": "null0" 00:16:01.867 }, 00:16:01.867 "method": "bdev_xnvme_create" 00:16:01.867 }, 00:16:01.867 { 00:16:01.867 "method": "bdev_wait_for_examine" 00:16:01.867 } 00:16:01.867 ] 00:16:01.867 } 00:16:01.867 ] 00:16:01.867 } 00:16:01.867 [2024-07-15 15:11:39.244381] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
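The io_uring pass that starts above reuses the same config with only io_mechanism flipped; the driving loop (xnvme.sh@38-39) is essentially the following, where run_copy_pair is a hypothetical stand-in for regenerating the JSON and running the two spdk_dd directions shown earlier:

    xnvme_io=(libaio io_uring)
    declare -A method_bdev_xnvme_create_0=([name]=null0 [filename]=/dev/nullb0)
    for io in "${xnvme_io[@]}"; do
        method_bdev_xnvme_create_0[io_mechanism]=$io
        run_copy_pair   # hypothetical helper: emit the config and run both spdk_dd copies
    done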
00:16:01.867 [2024-07-15 15:11:39.244563] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76394 ] 00:16:01.867 [2024-07-15 15:11:39.404452] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.867 [2024-07-15 15:11:39.657341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.560  Copying: 260/1024 [MB] (260 MBps) Copying: 532/1024 [MB] (271 MBps) Copying: 797/1024 [MB] (265 MBps) Copying: 1024/1024 [MB] (average 267 MBps) 00:16:11.560 00:16:11.560 15:11:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:16:11.560 15:11:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@195 -- # modprobe -r null_blk 00:16:11.560 00:16:11.560 real 0m41.942s 00:16:11.560 user 0m37.859s 00:16:11.560 sys 0m3.582s 00:16:11.560 ************************************ 00:16:11.560 END TEST xnvme_to_malloc_dd_copy 00:16:11.560 ************************************ 00:16:11.560 15:11:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:11.560 15:11:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:16:11.820 15:11:49 nvme_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:16:11.820 15:11:49 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:11.820 15:11:49 nvme_xnvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:11.820 15:11:49 nvme_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:11.820 15:11:49 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:11.820 ************************************ 00:16:11.820 START TEST xnvme_bdevperf 00:16:11.820 ************************************ 00:16:11.820 15:11:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1123 -- # xnvme_bdevperf 00:16:11.820 15:11:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:16:11.820 15:11:49 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:16:11.820 15:11:49 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:16:11.820 15:11:49 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # return 00:16:11.820 15:11:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:16:11.820 15:11:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:16:11.820 15:11:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:16:11.820 15:11:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:16:11.820 15:11:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:16:11.820 15:11:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:16:11.820 15:11:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:16:11.820 15:11:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:16:11.820 15:11:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:16:11.820 15:11:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:16:11.820 15:11:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:16:11.820 15:11:49 
nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:16:11.820 15:11:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:16:11.820 15:11:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:16:11.820 15:11:49 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:11.820 15:11:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:11.820 { 00:16:11.820 "subsystems": [ 00:16:11.820 { 00:16:11.820 "subsystem": "bdev", 00:16:11.820 "config": [ 00:16:11.820 { 00:16:11.820 "params": { 00:16:11.820 "io_mechanism": "libaio", 00:16:11.820 "filename": "/dev/nullb0", 00:16:11.820 "name": "null0" 00:16:11.820 }, 00:16:11.820 "method": "bdev_xnvme_create" 00:16:11.820 }, 00:16:11.820 { 00:16:11.820 "method": "bdev_wait_for_examine" 00:16:11.820 } 00:16:11.820 ] 00:16:11.820 } 00:16:11.820 ] 00:16:11.820 } 00:16:11.820 [2024-07-15 15:11:49.826328] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:16:11.820 [2024-07-15 15:11:49.826444] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76536 ] 00:16:12.079 [2024-07-15 15:11:49.987218] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.338 [2024-07-15 15:11:50.219416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.598 Running I/O for 5 seconds... 00:16:17.879 00:16:17.879 Latency(us) 00:16:17.879 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:17.879 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:17.879 null0 : 5.00 174007.08 679.72 0.00 0.00 365.27 136.83 565.21 00:16:17.879 =================================================================================================================== 00:16:17.879 Total : 174007.08 679.72 0.00 0.00 365.27 136.83 565.21 00:16:19.256 15:11:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:16:19.256 15:11:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:16:19.256 15:11:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:16:19.256 15:11:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:16:19.256 15:11:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:19.256 15:11:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:19.256 { 00:16:19.256 "subsystems": [ 00:16:19.256 { 00:16:19.256 "subsystem": "bdev", 00:16:19.256 "config": [ 00:16:19.256 { 00:16:19.256 "params": { 00:16:19.256 "io_mechanism": "io_uring", 00:16:19.256 "filename": "/dev/nullb0", 00:16:19.256 "name": "null0" 00:16:19.256 }, 00:16:19.256 "method": "bdev_xnvme_create" 00:16:19.256 }, 00:16:19.256 { 00:16:19.256 "method": "bdev_wait_for_examine" 00:16:19.256 } 00:16:19.256 ] 00:16:19.256 } 00:16:19.256 ] 00:16:19.256 } 00:16:19.256 [2024-07-15 15:11:57.118182] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
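The throughput figures above come from a 5-second, 4 KiB random-read bdevperf run against the xnvme bdev alone. An equivalent standalone command, with the generated config written to a file rather than passed on /dev/fd/62 (file path arbitrary, parameters as shown in the trace):

    cat > /tmp/xnvme_bdevperf.json <<'EOF'
    {
      "subsystems": [
        { "subsystem": "bdev",
          "config": [
            { "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
              "method": "bdev_xnvme_create" },
            { "method": "bdev_wait_for_examine" }
          ] }
      ]
    }
    EOF
    # 64 queued I/Os, 4096-byte random reads, 5 seconds, restricted to the null0 bdev.
    ./build/examples/bdevperf --json /tmp/xnvme_bdevperf.json \
        -q 64 -o 4096 -w randread -t 5 -T null0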
00:16:19.256 [2024-07-15 15:11:57.118322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76622 ] 00:16:19.256 [2024-07-15 15:11:57.280117] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.518 [2024-07-15 15:11:57.521302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.085 Running I/O for 5 seconds... 00:16:25.364 00:16:25.364 Latency(us) 00:16:25.364 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:25.364 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:25.364 null0 : 5.00 203165.01 793.61 0.00 0.00 312.36 211.06 416.75 00:16:25.364 =================================================================================================================== 00:16:25.364 Total : 203165.01 793.61 0.00 0.00 312.36 211.06 416.75 00:16:26.300 15:12:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:16:26.300 15:12:04 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@195 -- # modprobe -r null_blk 00:16:26.300 00:16:26.300 real 0m14.606s 00:16:26.300 user 0m12.005s 00:16:26.300 sys 0m2.395s 00:16:26.300 ************************************ 00:16:26.300 END TEST xnvme_bdevperf 00:16:26.300 ************************************ 00:16:26.300 15:12:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:26.300 15:12:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:26.300 15:12:04 nvme_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:16:26.300 ************************************ 00:16:26.300 END TEST nvme_xnvme 00:16:26.300 ************************************ 00:16:26.300 00:16:26.300 real 0m56.793s 00:16:26.300 user 0m49.962s 00:16:26.300 sys 0m6.133s 00:16:26.300 15:12:04 nvme_xnvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:26.300 15:12:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:26.559 15:12:04 -- common/autotest_common.sh@1142 -- # return 0 00:16:26.559 15:12:04 -- spdk/autotest.sh@249 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:16:26.559 15:12:04 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:26.559 15:12:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:26.559 15:12:04 -- common/autotest_common.sh@10 -- # set +x 00:16:26.559 ************************************ 00:16:26.559 START TEST blockdev_xnvme 00:16:26.559 ************************************ 00:16:26.559 15:12:04 blockdev_xnvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:16:26.559 * Looking for test storage... 
00:16:26.559 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:16:26.559 15:12:04 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:16:26.559 15:12:04 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:16:26.559 15:12:04 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:16:26.559 15:12:04 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:26.559 15:12:04 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:16:26.559 15:12:04 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:16:26.559 15:12:04 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:16:26.559 15:12:04 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:16:26.559 15:12:04 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:16:26.559 15:12:04 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:16:26.559 15:12:04 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:16:26.559 15:12:04 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:16:26.559 15:12:04 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:16:26.559 15:12:04 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:16:26.559 15:12:04 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:16:26.559 15:12:04 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:16:26.559 15:12:04 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:16:26.559 15:12:04 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:16:26.559 15:12:04 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:16:26.559 15:12:04 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:16:26.559 15:12:04 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:16:26.559 15:12:04 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:16:26.559 15:12:04 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:16:26.559 15:12:04 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:16:26.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:26.559 15:12:04 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=76762 00:16:26.559 15:12:04 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:16:26.559 15:12:04 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:16:26.559 15:12:04 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 76762 00:16:26.559 15:12:04 blockdev_xnvme -- common/autotest_common.sh@829 -- # '[' -z 76762 ']' 00:16:26.559 15:12:04 blockdev_xnvme -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:26.559 15:12:04 blockdev_xnvme -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:26.559 15:12:04 blockdev_xnvme -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:26.559 15:12:04 blockdev_xnvme -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:26.560 15:12:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:26.560 [2024-07-15 15:12:04.654776] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
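blockdev.sh@46-49 above launches spdk_tgt and blocks until its RPC socket at /var/tmp/spdk.sock answers. A rough sketch of that startup sequence, polling rpc_get_methods in place of the test framework's waitforlisten helper (the polling interval is arbitrary):

    ./build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    trap 'kill "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
    # Wait until the target's JSON-RPC socket accepts requests.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done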
00:16:26.560 [2024-07-15 15:12:04.654896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76762 ] 00:16:26.818 [2024-07-15 15:12:04.818427] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.077 [2024-07-15 15:12:05.061377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.012 15:12:05 blockdev_xnvme -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:28.012 15:12:05 blockdev_xnvme -- common/autotest_common.sh@862 -- # return 0 00:16:28.012 15:12:05 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:16:28.012 15:12:05 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:16:28.012 15:12:05 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:16:28.012 15:12:05 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:16:28.012 15:12:05 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:28.579 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:28.579 Waiting for block devices as requested 00:16:28.853 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:28.853 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:28.853 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:16:29.112 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:16:34.376 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:16:34.376 15:12:12 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:16:34.376 15:12:12 blockdev_xnvme -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:16:34.376 15:12:12 blockdev_xnvme -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:16:34.376 15:12:12 blockdev_xnvme -- common/autotest_common.sh@1670 -- # local nvme bdf 00:16:34.376 15:12:12 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:16:34.376 15:12:12 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:16:34.376 15:12:12 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:16:34.376 15:12:12 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:34.376 15:12:12 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:34.376 15:12:12 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:16:34.376 15:12:12 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:16:34.376 15:12:12 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:16:34.376 15:12:12 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:34.376 15:12:12 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:34.376 15:12:12 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:16:34.376 15:12:12 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:16:34.376 15:12:12 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:16:34.376 15:12:12 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:16:34.376 15:12:12 blockdev_xnvme -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:34.376 15:12:12 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:16:34.376 15:12:12 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:16:34.376 15:12:12 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:16:34.376 15:12:12 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:16:34.376 15:12:12 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:34.376 15:12:12 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:16:34.376 15:12:12 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:16:34.376 15:12:12 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:16:34.376 15:12:12 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:16:34.376 15:12:12 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:34.376 15:12:12 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:16:34.376 15:12:12 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:16:34.376 15:12:12 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:16:34.376 15:12:12 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:16:34.376 15:12:12 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:34.376 15:12:12 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:16:34.376 15:12:12 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:16:34.376 15:12:12 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:16:34.376 15:12:12 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:16:34.376 15:12:12 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:34.376 15:12:12 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:34.376 15:12:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:16:34.376 15:12:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:34.376 15:12:12 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:16:34.376 15:12:12 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:34.376 15:12:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:16:34.376 15:12:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:34.376 15:12:12 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:16:34.376 15:12:12 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:34.376 15:12:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:16:34.376 15:12:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:34.376 15:12:12 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:16:34.376 15:12:12 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:34.376 15:12:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:16:34.376 15:12:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:34.376 15:12:12 blockdev_xnvme -- 
bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:16:34.376 15:12:12 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:34.376 15:12:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:16:34.376 15:12:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:34.376 15:12:12 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:16:34.376 15:12:12 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:34.376 15:12:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:16:34.376 15:12:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:34.376 15:12:12 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:16:34.376 15:12:12 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:16:34.376 15:12:12 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:16:34.376 15:12:12 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.376 15:12:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:34.376 15:12:12 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:16:34.376 nvme0n1 00:16:34.376 nvme1n1 00:16:34.376 nvme2n1 00:16:34.376 nvme2n2 00:16:34.376 nvme2n3 00:16:34.376 nvme3n1 00:16:34.376 15:12:12 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.376 15:12:12 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:16:34.376 15:12:12 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.376 15:12:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:34.376 15:12:12 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.376 15:12:12 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:16:34.377 15:12:12 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:16:34.377 15:12:12 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.377 15:12:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:34.377 15:12:12 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.377 15:12:12 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:16:34.377 15:12:12 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.377 15:12:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:34.377 15:12:12 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.377 15:12:12 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:16:34.377 15:12:12 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.377 15:12:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:34.377 15:12:12 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.377 15:12:12 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:16:34.377 15:12:12 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:16:34.377 15:12:12 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == 
false)' 00:16:34.377 15:12:12 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.377 15:12:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:34.377 15:12:12 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.377 15:12:12 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:16:34.377 15:12:12 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "283c5867-893a-4384-ab97-ed4dbb625e1a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "283c5867-893a-4384-ab97-ed4dbb625e1a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "6905dd5e-55ad-457a-bbdd-00284134ab40"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "6905dd5e-55ad-457a-bbdd-00284134ab40",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "e40de2ad-7ceb-4837-a4a6-5ea28b789e6d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e40de2ad-7ceb-4837-a4a6-5ea28b789e6d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "9e044a54-fa0a-4a31-a8de-2995987bd493"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9e044a54-fa0a-4a31-a8de-2995987bd493",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": 
false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "5bb585b7-fe8f-48a8-b535-98c68ba08951"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5bb585b7-fe8f-48a8-b535-98c68ba08951",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "ce4b16ca-68de-46a8-bc23-e02e261ba7a1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "ce4b16ca-68de-46a8-bc23-e02e261ba7a1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:34.377 15:12:12 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:16:34.377 15:12:12 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:16:34.377 15:12:12 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:16:34.377 15:12:12 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:16:34.377 15:12:12 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 76762 00:16:34.377 15:12:12 blockdev_xnvme -- common/autotest_common.sh@948 -- # '[' -z 76762 ']' 00:16:34.377 15:12:12 blockdev_xnvme -- common/autotest_common.sh@952 -- # kill -0 76762 00:16:34.377 15:12:12 blockdev_xnvme -- common/autotest_common.sh@953 -- # uname 00:16:34.377 15:12:12 blockdev_xnvme -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:34.377 15:12:12 blockdev_xnvme -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76762 00:16:34.377 killing process with pid 76762 00:16:34.377 15:12:12 blockdev_xnvme -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:34.377 15:12:12 blockdev_xnvme -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:34.377 15:12:12 blockdev_xnvme -- common/autotest_common.sh@966 -- # 
echo 'killing process with pid 76762' 00:16:34.377 15:12:12 blockdev_xnvme -- common/autotest_common.sh@967 -- # kill 76762 00:16:34.377 15:12:12 blockdev_xnvme -- common/autotest_common.sh@972 -- # wait 76762 00:16:36.921 15:12:14 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:36.921 15:12:14 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:16:36.921 15:12:14 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:16:36.921 15:12:14 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:36.921 15:12:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:36.921 ************************************ 00:16:36.921 START TEST bdev_hello_world 00:16:36.921 ************************************ 00:16:36.921 15:12:14 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:16:36.921 [2024-07-15 15:12:15.022363] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:16:36.921 [2024-07-15 15:12:15.022478] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77138 ] 00:16:37.179 [2024-07-15 15:12:15.186572] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.437 [2024-07-15 15:12:15.415935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.006 [2024-07-15 15:12:15.883264] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:16:38.006 [2024-07-15 15:12:15.883318] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:16:38.006 [2024-07-15 15:12:15.883338] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:16:38.006 [2024-07-15 15:12:15.885129] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:16:38.006 [2024-07-15 15:12:15.885498] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:16:38.006 [2024-07-15 15:12:15.885526] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:16:38.006 [2024-07-15 15:12:15.885690] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
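The hello-world pass above opens the first xnvme bdev, writes a string through the bdev layer, and reads it back. The command behind it is the packaged example binary pointed at the bdev config that blockdev.sh generated from the bdev_xnvme_create commands printed earlier; a sketch with paths as used in this run:

    # Write "Hello World!" to nvme0n1 via the SPDK bdev layer and read it back.
    ./build/examples/hello_bdev \
        --json test/bdev/bdev.json \
        -b nvme0n1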
00:16:38.006 00:16:38.006 [2024-07-15 15:12:15.885719] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:16:39.405 00:16:39.405 real 0m2.310s 00:16:39.405 user 0m1.954s 00:16:39.405 sys 0m0.241s 00:16:39.405 ************************************ 00:16:39.405 END TEST bdev_hello_world 00:16:39.405 ************************************ 00:16:39.405 15:12:17 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:39.405 15:12:17 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:16:39.405 15:12:17 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:16:39.405 15:12:17 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:16:39.405 15:12:17 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:39.405 15:12:17 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:39.405 15:12:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:39.405 ************************************ 00:16:39.405 START TEST bdev_bounds 00:16:39.405 ************************************ 00:16:39.405 15:12:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:16:39.405 Process bdevio pid: 77186 00:16:39.405 15:12:17 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=77186 00:16:39.405 15:12:17 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:39.405 15:12:17 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:16:39.405 15:12:17 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 77186' 00:16:39.405 15:12:17 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 77186 00:16:39.405 15:12:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 77186 ']' 00:16:39.405 15:12:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.405 15:12:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:39.405 15:12:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:39.405 15:12:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:39.405 15:12:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:39.405 [2024-07-15 15:12:17.394464] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
00:16:39.405 [2024-07-15 15:12:17.394677] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77186 ] 00:16:39.663 [2024-07-15 15:12:17.563674] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:39.921 [2024-07-15 15:12:17.805879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:39.921 [2024-07-15 15:12:17.806051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.921 [2024-07-15 15:12:17.806103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:40.486 15:12:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:40.486 15:12:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:16:40.486 15:12:18 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:16:40.486 I/O targets: 00:16:40.486 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:16:40.486 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:16:40.486 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:40.486 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:40.486 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:40.486 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:16:40.486 00:16:40.486 00:16:40.486 CUnit - A unit testing framework for C - Version 2.1-3 00:16:40.486 http://cunit.sourceforge.net/ 00:16:40.486 00:16:40.486 00:16:40.486 Suite: bdevio tests on: nvme3n1 00:16:40.486 Test: blockdev write read block ...passed 00:16:40.486 Test: blockdev write zeroes read block ...passed 00:16:40.486 Test: blockdev write zeroes read no split ...passed 00:16:40.486 Test: blockdev write zeroes read split ...passed 00:16:40.486 Test: blockdev write zeroes read split partial ...passed 00:16:40.486 Test: blockdev reset ...passed 00:16:40.486 Test: blockdev write read 8 blocks ...passed 00:16:40.486 Test: blockdev write read size > 128k ...passed 00:16:40.486 Test: blockdev write read invalid size ...passed 00:16:40.486 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:40.486 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:40.486 Test: blockdev write read max offset ...passed 00:16:40.486 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:40.486 Test: blockdev writev readv 8 blocks ...passed 00:16:40.486 Test: blockdev writev readv 30 x 1block ...passed 00:16:40.486 Test: blockdev writev readv block ...passed 00:16:40.486 Test: blockdev writev readv size > 128k ...passed 00:16:40.486 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:40.486 Test: blockdev comparev and writev ...passed 00:16:40.486 Test: blockdev nvme passthru rw ...passed 00:16:40.486 Test: blockdev nvme passthru vendor specific ...passed 00:16:40.486 Test: blockdev nvme admin passthru ...passed 00:16:40.486 Test: blockdev copy ...passed 00:16:40.486 Suite: bdevio tests on: nvme2n3 00:16:40.486 Test: blockdev write read block ...passed 00:16:40.486 Test: blockdev write zeroes read block ...passed 00:16:40.486 Test: blockdev write zeroes read no split ...passed 00:16:40.743 Test: blockdev write zeroes read split ...passed 00:16:40.743 Test: blockdev write zeroes read split partial ...passed 00:16:40.743 Test: blockdev reset ...passed 
00:16:40.743 Test: blockdev write read 8 blocks ...passed 00:16:40.743 Test: blockdev write read size > 128k ...passed 00:16:40.743 Test: blockdev write read invalid size ...passed 00:16:40.743 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:40.743 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:40.743 Test: blockdev write read max offset ...passed 00:16:40.743 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:40.743 Test: blockdev writev readv 8 blocks ...passed 00:16:40.743 Test: blockdev writev readv 30 x 1block ...passed 00:16:40.743 Test: blockdev writev readv block ...passed 00:16:40.743 Test: blockdev writev readv size > 128k ...passed 00:16:40.743 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:40.743 Test: blockdev comparev and writev ...passed 00:16:40.743 Test: blockdev nvme passthru rw ...passed 00:16:40.743 Test: blockdev nvme passthru vendor specific ...passed 00:16:40.743 Test: blockdev nvme admin passthru ...passed 00:16:40.743 Test: blockdev copy ...passed 00:16:40.743 Suite: bdevio tests on: nvme2n2 00:16:40.743 Test: blockdev write read block ...passed 00:16:40.743 Test: blockdev write zeroes read block ...passed 00:16:40.743 Test: blockdev write zeroes read no split ...passed 00:16:40.743 Test: blockdev write zeroes read split ...passed 00:16:40.743 Test: blockdev write zeroes read split partial ...passed 00:16:40.743 Test: blockdev reset ...passed 00:16:40.743 Test: blockdev write read 8 blocks ...passed 00:16:40.743 Test: blockdev write read size > 128k ...passed 00:16:40.743 Test: blockdev write read invalid size ...passed 00:16:40.743 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:40.743 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:40.743 Test: blockdev write read max offset ...passed 00:16:40.743 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:40.743 Test: blockdev writev readv 8 blocks ...passed 00:16:40.743 Test: blockdev writev readv 30 x 1block ...passed 00:16:40.743 Test: blockdev writev readv block ...passed 00:16:40.743 Test: blockdev writev readv size > 128k ...passed 00:16:40.743 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:40.743 Test: blockdev comparev and writev ...passed 00:16:40.743 Test: blockdev nvme passthru rw ...passed 00:16:40.743 Test: blockdev nvme passthru vendor specific ...passed 00:16:40.743 Test: blockdev nvme admin passthru ...passed 00:16:40.743 Test: blockdev copy ...passed 00:16:40.743 Suite: bdevio tests on: nvme2n1 00:16:40.743 Test: blockdev write read block ...passed 00:16:40.743 Test: blockdev write zeroes read block ...passed 00:16:40.743 Test: blockdev write zeroes read no split ...passed 00:16:40.743 Test: blockdev write zeroes read split ...passed 00:16:40.743 Test: blockdev write zeroes read split partial ...passed 00:16:40.743 Test: blockdev reset ...passed 00:16:40.743 Test: blockdev write read 8 blocks ...passed 00:16:40.743 Test: blockdev write read size > 128k ...passed 00:16:40.743 Test: blockdev write read invalid size ...passed 00:16:40.743 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:40.743 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:40.743 Test: blockdev write read max offset ...passed 00:16:40.743 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:40.743 Test: blockdev writev readv 8 blocks 
...passed 00:16:40.743 Test: blockdev writev readv 30 x 1block ...passed 00:16:40.743 Test: blockdev writev readv block ...passed 00:16:40.743 Test: blockdev writev readv size > 128k ...passed 00:16:40.744 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:40.744 Test: blockdev comparev and writev ...passed 00:16:40.744 Test: blockdev nvme passthru rw ...passed 00:16:40.744 Test: blockdev nvme passthru vendor specific ...passed 00:16:40.744 Test: blockdev nvme admin passthru ...passed 00:16:40.744 Test: blockdev copy ...passed 00:16:40.744 Suite: bdevio tests on: nvme1n1 00:16:40.744 Test: blockdev write read block ...passed 00:16:40.744 Test: blockdev write zeroes read block ...passed 00:16:40.744 Test: blockdev write zeroes read no split ...passed 00:16:41.001 Test: blockdev write zeroes read split ...passed 00:16:41.001 Test: blockdev write zeroes read split partial ...passed 00:16:41.001 Test: blockdev reset ...passed 00:16:41.001 Test: blockdev write read 8 blocks ...passed 00:16:41.001 Test: blockdev write read size > 128k ...passed 00:16:41.001 Test: blockdev write read invalid size ...passed 00:16:41.001 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:41.001 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:41.001 Test: blockdev write read max offset ...passed 00:16:41.001 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:41.001 Test: blockdev writev readv 8 blocks ...passed 00:16:41.001 Test: blockdev writev readv 30 x 1block ...passed 00:16:41.001 Test: blockdev writev readv block ...passed 00:16:41.001 Test: blockdev writev readv size > 128k ...passed 00:16:41.001 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:41.001 Test: blockdev comparev and writev ...passed 00:16:41.001 Test: blockdev nvme passthru rw ...passed 00:16:41.001 Test: blockdev nvme passthru vendor specific ...passed 00:16:41.001 Test: blockdev nvme admin passthru ...passed 00:16:41.001 Test: blockdev copy ...passed 00:16:41.001 Suite: bdevio tests on: nvme0n1 00:16:41.001 Test: blockdev write read block ...passed 00:16:41.001 Test: blockdev write zeroes read block ...passed 00:16:41.001 Test: blockdev write zeroes read no split ...passed 00:16:41.001 Test: blockdev write zeroes read split ...passed 00:16:41.001 Test: blockdev write zeroes read split partial ...passed 00:16:41.001 Test: blockdev reset ...passed 00:16:41.001 Test: blockdev write read 8 blocks ...passed 00:16:41.001 Test: blockdev write read size > 128k ...passed 00:16:41.001 Test: blockdev write read invalid size ...passed 00:16:41.001 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:41.001 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:41.001 Test: blockdev write read max offset ...passed 00:16:41.001 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:41.001 Test: blockdev writev readv 8 blocks ...passed 00:16:41.001 Test: blockdev writev readv 30 x 1block ...passed 00:16:41.001 Test: blockdev writev readv block ...passed 00:16:41.001 Test: blockdev writev readv size > 128k ...passed 00:16:41.001 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:41.001 Test: blockdev comparev and writev ...passed 00:16:41.001 Test: blockdev nvme passthru rw ...passed 00:16:41.002 Test: blockdev nvme passthru vendor specific ...passed 00:16:41.002 Test: blockdev nvme admin passthru ...passed 00:16:41.002 Test: blockdev copy ...passed 
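A quick consistency check on the summary that follows: each of the 6 bdevs listed under "I/O targets" runs the same 23-case suite, so the test total is 6 x 23 = 138, matching the 138/138 line below.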
00:16:41.002 00:16:41.002 Run Summary: Type Total Ran Passed Failed Inactive 00:16:41.002 suites 6 6 n/a 0 0 00:16:41.002 tests 138 138 138 0 0 00:16:41.002 asserts 780 780 780 0 n/a 00:16:41.002 00:16:41.002 Elapsed time = 1.777 seconds 00:16:41.002 0 00:16:41.002 15:12:19 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 77186 00:16:41.002 15:12:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 77186 ']' 00:16:41.002 15:12:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 77186 00:16:41.002 15:12:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:16:41.002 15:12:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:41.002 15:12:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77186 00:16:41.259 killing process with pid 77186 00:16:41.259 15:12:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:41.259 15:12:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:41.259 15:12:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77186' 00:16:41.259 15:12:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@967 -- # kill 77186 00:16:41.259 15:12:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # wait 77186 00:16:42.668 15:12:20 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:16:42.668 00:16:42.668 real 0m3.183s 00:16:42.668 user 0m7.556s 00:16:42.668 sys 0m0.376s 00:16:42.668 15:12:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:42.668 ************************************ 00:16:42.668 END TEST bdev_bounds 00:16:42.668 ************************************ 00:16:42.668 15:12:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:42.668 15:12:20 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:16:42.668 15:12:20 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:16:42.668 15:12:20 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:42.668 15:12:20 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:42.668 15:12:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:42.668 ************************************ 00:16:42.668 START TEST bdev_nbd 00:16:42.668 ************************************ 00:16:42.668 15:12:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:16:42.668 15:12:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:16:42.668 15:12:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:16:42.668 15:12:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:42.668 15:12:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:42.668 15:12:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:16:42.668 15:12:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 
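What follows is the bdev_nbd half of the test: a bdev_svc app is started on its own RPC socket with the same bdev.json, and each of the six bdevs is then exported as a Linux NBD block device. Condensed from the traced commands below (paths, socket name, and waitforlisten helper as they appear in this run), the launch looks roughly like:

/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
nbd_pid=$!
waitforlisten "$nbd_pid" /var/tmp/spdk-nbd.sock   # autotest helper: block until the RPC socket is up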
00:16:42.668 15:12:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:16:42.668 15:12:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:16:42.668 15:12:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:16:42.668 15:12:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:16:42.668 15:12:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:16:42.668 15:12:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:42.668 15:12:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:16:42.668 15:12:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:16:42.668 15:12:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:16:42.668 15:12:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=77258 00:16:42.668 15:12:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:42.668 15:12:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:16:42.668 15:12:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 77258 /var/tmp/spdk-nbd.sock 00:16:42.668 15:12:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 77258 ']' 00:16:42.668 15:12:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:42.668 15:12:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:42.668 15:12:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:42.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:42.668 15:12:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:42.668 15:12:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:42.668 [2024-07-15 15:12:20.657468] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
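The long grep/dd sequences that follow are the script attaching each bdev to an NBD node and probing it. Per device the pattern below repeats; the RPC and dd commands are taken from the trace, while the retry loop is a paraphrase of the waitfornbd helper rather than its exact body:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
# waitfornbd: poll until the node shows up in /proc/partitions, then read one
# 4 KiB block with O_DIRECT to confirm the device actually serves I/O.
for ((i = 1; i <= 20; i++)); do
    grep -q -w nbd0 /proc/partitions && break
    sleep 0.1
done
dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct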
00:16:42.668 [2024-07-15 15:12:20.657692] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:42.926 [2024-07-15 15:12:20.826455] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.183 [2024-07-15 15:12:21.076149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.749 15:12:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:43.749 15:12:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:16:43.749 15:12:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:16:43.749 15:12:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:43.749 15:12:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:16:43.749 15:12:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:16:43.749 15:12:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:16:43.749 15:12:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:43.749 15:12:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:16:43.749 15:12:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:16:43.749 15:12:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:16:43.749 15:12:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:16:43.749 15:12:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:16:43.749 15:12:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:43.749 15:12:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:16:43.749 15:12:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:16:43.749 15:12:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:16:43.749 15:12:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:16:43.749 15:12:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:16:43.749 15:12:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:16:43.749 15:12:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:16:43.749 15:12:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:16:43.749 15:12:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:16:43.749 15:12:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:16:43.749 15:12:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:16:43.749 15:12:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:16:43.749 15:12:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:43.749 
1+0 records in 00:16:43.749 1+0 records out 00:16:43.749 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000692904 s, 5.9 MB/s 00:16:43.749 15:12:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:43.749 15:12:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:16:43.749 15:12:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:43.749 15:12:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:16:43.749 15:12:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:16:43.749 15:12:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:43.749 15:12:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:43.749 15:12:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:16:44.006 15:12:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:16:44.006 15:12:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:16:44.006 15:12:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:16:44.006 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:16:44.006 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:16:44.006 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:16:44.006 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:16:44.006 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:16:44.006 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:16:44.006 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:16:44.006 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:16:44.006 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:44.006 1+0 records in 00:16:44.006 1+0 records out 00:16:44.006 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000801162 s, 5.1 MB/s 00:16:44.006 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.006 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:16:44.006 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.006 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:16:44.006 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:16:44.006 15:12:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:44.006 15:12:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:44.006 15:12:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:16:44.264 15:12:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:16:44.264 15:12:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:16:44.264 15:12:22 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:16:44.264 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:16:44.264 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:16:44.264 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:16:44.264 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:16:44.264 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:16:44.264 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:16:44.264 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:16:44.264 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:16:44.264 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:44.264 1+0 records in 00:16:44.264 1+0 records out 00:16:44.264 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000549123 s, 7.5 MB/s 00:16:44.264 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.264 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:16:44.264 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.264 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:16:44.264 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:16:44.264 15:12:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:44.264 15:12:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:44.264 15:12:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:16:44.523 15:12:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:16:44.523 15:12:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:16:44.523 15:12:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:16:44.523 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:16:44.523 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:16:44.523 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:16:44.523 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:16:44.523 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:16:44.523 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:16:44.523 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:16:44.523 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:16:44.523 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:44.523 1+0 records in 00:16:44.523 1+0 records out 00:16:44.523 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000807852 s, 5.1 MB/s 00:16:44.523 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.523 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:16:44.523 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.523 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:16:44.523 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:16:44.523 15:12:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:44.523 15:12:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:44.523 15:12:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:16:44.784 15:12:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:16:44.784 15:12:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:16:44.784 15:12:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:16:44.784 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:16:44.784 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:16:44.784 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:16:44.784 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:16:44.784 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:16:44.784 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:16:44.784 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:16:44.784 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:16:44.784 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:44.784 1+0 records in 00:16:44.784 1+0 records out 00:16:44.784 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000773462 s, 5.3 MB/s 00:16:44.784 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.784 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:16:44.784 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.784 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:16:44.784 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:16:44.784 15:12:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:44.784 15:12:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:44.784 15:12:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:16:45.047 15:12:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:16:45.047 15:12:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:16:45.047 15:12:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:16:45.047 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:16:45.047 15:12:22 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:16:45.047 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:16:45.047 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:16:45.047 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:16:45.047 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:16:45.047 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:16:45.047 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:16:45.047 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:45.047 1+0 records in 00:16:45.047 1+0 records out 00:16:45.047 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103664 s, 4.0 MB/s 00:16:45.047 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:45.047 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:16:45.047 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:45.047 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:16:45.047 15:12:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:16:45.047 15:12:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:45.047 15:12:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:45.047 15:12:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:45.306 15:12:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:16:45.306 { 00:16:45.306 "nbd_device": "/dev/nbd0", 00:16:45.306 "bdev_name": "nvme0n1" 00:16:45.306 }, 00:16:45.306 { 00:16:45.306 "nbd_device": "/dev/nbd1", 00:16:45.306 "bdev_name": "nvme1n1" 00:16:45.306 }, 00:16:45.306 { 00:16:45.306 "nbd_device": "/dev/nbd2", 00:16:45.306 "bdev_name": "nvme2n1" 00:16:45.306 }, 00:16:45.306 { 00:16:45.306 "nbd_device": "/dev/nbd3", 00:16:45.306 "bdev_name": "nvme2n2" 00:16:45.306 }, 00:16:45.306 { 00:16:45.306 "nbd_device": "/dev/nbd4", 00:16:45.306 "bdev_name": "nvme2n3" 00:16:45.306 }, 00:16:45.306 { 00:16:45.306 "nbd_device": "/dev/nbd5", 00:16:45.306 "bdev_name": "nvme3n1" 00:16:45.306 } 00:16:45.306 ]' 00:16:45.306 15:12:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:16:45.306 15:12:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:16:45.306 { 00:16:45.306 "nbd_device": "/dev/nbd0", 00:16:45.306 "bdev_name": "nvme0n1" 00:16:45.306 }, 00:16:45.307 { 00:16:45.307 "nbd_device": "/dev/nbd1", 00:16:45.307 "bdev_name": "nvme1n1" 00:16:45.307 }, 00:16:45.307 { 00:16:45.307 "nbd_device": "/dev/nbd2", 00:16:45.307 "bdev_name": "nvme2n1" 00:16:45.307 }, 00:16:45.307 { 00:16:45.307 "nbd_device": "/dev/nbd3", 00:16:45.307 "bdev_name": "nvme2n2" 00:16:45.307 }, 00:16:45.307 { 00:16:45.307 "nbd_device": "/dev/nbd4", 00:16:45.307 "bdev_name": "nvme2n3" 00:16:45.307 }, 00:16:45.307 { 00:16:45.307 "nbd_device": "/dev/nbd5", 00:16:45.307 "bdev_name": "nvme3n1" 00:16:45.307 } 00:16:45.307 ]' 00:16:45.307 15:12:23 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:16:45.307 15:12:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:16:45.307 15:12:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:45.307 15:12:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:16:45.307 15:12:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:45.307 15:12:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:45.307 15:12:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:45.307 15:12:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:45.566 15:12:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:45.566 15:12:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:45.566 15:12:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:45.566 15:12:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:45.566 15:12:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:45.566 15:12:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:45.566 15:12:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:45.566 15:12:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:45.566 15:12:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:45.566 15:12:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:45.566 15:12:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:45.566 15:12:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:45.566 15:12:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:45.566 15:12:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:45.566 15:12:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:45.566 15:12:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:45.566 15:12:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:45.566 15:12:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:45.566 15:12:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:45.566 15:12:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:16:45.825 15:12:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:16:45.826 15:12:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:16:45.826 15:12:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:16:45.826 15:12:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:45.826 15:12:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:45.826 15:12:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:16:45.826 15:12:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:45.826 15:12:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:45.826 15:12:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:45.826 15:12:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:16:46.084 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:16:46.084 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:16:46.084 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:16:46.084 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:46.084 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:46.084 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:16:46.084 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:46.084 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:46.085 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:46.085 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:16:46.343 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:16:46.343 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:16:46.343 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:16:46.343 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:46.343 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:46.343 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:16:46.343 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:46.343 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:46.343 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:46.343 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:16:46.601 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:16:46.601 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:16:46.601 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:16:46.601 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:46.601 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:46.601 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:16:46.601 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:46.601 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:46.601 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:46.601 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:46.601 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:46.601 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:46.601 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:46.601 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:46.601 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:46.601 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:46.601 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:46.860 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:46.860 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:46.860 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:46.860 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:16:46.860 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:16:46.860 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:16:46.860 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:46.860 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:46.860 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:16:46.860 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:46.860 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:46.860 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:46.860 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:46.860 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:46.860 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:16:46.860 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:46.860 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:46.860 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:46.860 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:16:46.860 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:46.860 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:46.860 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:16:46.860 /dev/nbd0 00:16:46.860 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:46.860 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:46.860 15:12:24 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:16:46.860 15:12:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:16:46.860 15:12:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:16:46.860 15:12:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:16:46.860 15:12:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:16:46.860 15:12:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:16:46.860 15:12:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:16:46.860 15:12:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:16:46.860 15:12:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:46.860 1+0 records in 00:16:46.860 1+0 records out 00:16:46.860 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000681824 s, 6.0 MB/s 00:16:46.860 15:12:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:46.860 15:12:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:16:46.860 15:12:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:46.860 15:12:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:16:46.860 15:12:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:16:46.860 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:46.860 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:46.860 15:12:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:16:47.119 /dev/nbd1 00:16:47.119 15:12:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:47.119 15:12:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:47.119 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:16:47.119 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:16:47.119 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:16:47.119 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:16:47.120 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:16:47.120 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:16:47.120 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:16:47.120 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:16:47.120 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:47.120 1+0 records in 00:16:47.120 1+0 records out 00:16:47.120 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000684168 s, 6.0 MB/s 00:16:47.120 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:47.120 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:16:47.120 15:12:25 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:47.120 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:16:47.120 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:16:47.120 15:12:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:47.120 15:12:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:47.120 15:12:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:16:47.378 /dev/nbd10 00:16:47.378 15:12:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:16:47.378 15:12:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:16:47.378 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:16:47.378 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:16:47.378 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:16:47.378 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:16:47.378 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:16:47.378 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:16:47.378 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:16:47.378 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:16:47.378 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:47.378 1+0 records in 00:16:47.378 1+0 records out 00:16:47.378 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000550027 s, 7.4 MB/s 00:16:47.378 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:47.378 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:16:47.378 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:47.378 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:16:47.378 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:16:47.378 15:12:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:47.378 15:12:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:47.378 15:12:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:16:47.636 /dev/nbd11 00:16:47.636 15:12:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:16:47.636 15:12:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:16:47.636 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:16:47.636 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:16:47.636 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:16:47.636 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:16:47.636 15:12:25 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:16:47.636 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:16:47.636 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:16:47.636 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:16:47.636 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:47.636 1+0 records in 00:16:47.636 1+0 records out 00:16:47.636 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000525286 s, 7.8 MB/s 00:16:47.636 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:47.636 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:16:47.636 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:47.636 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:16:47.636 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:16:47.636 15:12:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:47.636 15:12:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:47.636 15:12:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:16:47.897 /dev/nbd12 00:16:47.897 15:12:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:16:47.897 15:12:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:16:47.897 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:16:47.897 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:16:47.897 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:16:47.897 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:16:47.897 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:16:47.897 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:16:47.897 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:16:47.897 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:16:47.897 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:47.897 1+0 records in 00:16:47.897 1+0 records out 00:16:47.897 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000622861 s, 6.6 MB/s 00:16:47.897 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:47.897 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:16:47.897 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:47.897 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:16:47.897 15:12:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:16:47.897 15:12:25 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:47.897 15:12:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:47.897 15:12:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:16:48.161 /dev/nbd13 00:16:48.161 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:16:48.161 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:16:48.161 15:12:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:16:48.161 15:12:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:16:48.161 15:12:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:16:48.161 15:12:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:16:48.161 15:12:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:16:48.161 15:12:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:16:48.161 15:12:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:16:48.161 15:12:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:16:48.161 15:12:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:48.161 1+0 records in 00:16:48.161 1+0 records out 00:16:48.161 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000698629 s, 5.9 MB/s 00:16:48.161 15:12:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:48.161 15:12:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:16:48.161 15:12:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:48.161 15:12:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:16:48.161 15:12:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:16:48.161 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:48.161 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:48.161 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:48.161 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:48.161 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:48.419 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:48.419 { 00:16:48.419 "nbd_device": "/dev/nbd0", 00:16:48.419 "bdev_name": "nvme0n1" 00:16:48.419 }, 00:16:48.419 { 00:16:48.419 "nbd_device": "/dev/nbd1", 00:16:48.419 "bdev_name": "nvme1n1" 00:16:48.419 }, 00:16:48.419 { 00:16:48.419 "nbd_device": "/dev/nbd10", 00:16:48.419 "bdev_name": "nvme2n1" 00:16:48.419 }, 00:16:48.419 { 00:16:48.419 "nbd_device": "/dev/nbd11", 00:16:48.419 "bdev_name": "nvme2n2" 00:16:48.419 }, 00:16:48.419 { 00:16:48.419 "nbd_device": "/dev/nbd12", 00:16:48.419 "bdev_name": "nvme2n3" 00:16:48.419 }, 00:16:48.419 { 00:16:48.419 "nbd_device": "/dev/nbd13", 00:16:48.419 "bdev_name": "nvme3n1" 00:16:48.419 } 00:16:48.419 ]' 00:16:48.419 15:12:26 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:48.419 { 00:16:48.419 "nbd_device": "/dev/nbd0", 00:16:48.419 "bdev_name": "nvme0n1" 00:16:48.419 }, 00:16:48.419 { 00:16:48.419 "nbd_device": "/dev/nbd1", 00:16:48.419 "bdev_name": "nvme1n1" 00:16:48.419 }, 00:16:48.419 { 00:16:48.419 "nbd_device": "/dev/nbd10", 00:16:48.419 "bdev_name": "nvme2n1" 00:16:48.419 }, 00:16:48.419 { 00:16:48.419 "nbd_device": "/dev/nbd11", 00:16:48.419 "bdev_name": "nvme2n2" 00:16:48.419 }, 00:16:48.419 { 00:16:48.419 "nbd_device": "/dev/nbd12", 00:16:48.419 "bdev_name": "nvme2n3" 00:16:48.419 }, 00:16:48.419 { 00:16:48.419 "nbd_device": "/dev/nbd13", 00:16:48.419 "bdev_name": "nvme3n1" 00:16:48.419 } 00:16:48.419 ]' 00:16:48.419 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:48.420 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:48.420 /dev/nbd1 00:16:48.420 /dev/nbd10 00:16:48.420 /dev/nbd11 00:16:48.420 /dev/nbd12 00:16:48.420 /dev/nbd13' 00:16:48.420 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:48.420 /dev/nbd1 00:16:48.420 /dev/nbd10 00:16:48.420 /dev/nbd11 00:16:48.420 /dev/nbd12 00:16:48.420 /dev/nbd13' 00:16:48.420 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:48.420 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:16:48.420 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:16:48.420 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:16:48.420 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:16:48.420 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:16:48.420 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:48.420 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:48.420 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:48.420 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:48.420 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:48.420 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:16:48.420 256+0 records in 00:16:48.420 256+0 records out 00:16:48.420 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013852 s, 75.7 MB/s 00:16:48.420 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:48.420 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:48.420 256+0 records in 00:16:48.420 256+0 records out 00:16:48.420 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0880449 s, 11.9 MB/s 00:16:48.420 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:48.420 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:48.678 256+0 records in 00:16:48.678 256+0 records out 00:16:48.678 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.10325 s, 10.2 MB/s 00:16:48.678 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:48.678 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:16:48.678 256+0 records in 00:16:48.678 256+0 records out 00:16:48.678 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0814959 s, 12.9 MB/s 00:16:48.678 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:48.678 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:16:48.678 256+0 records in 00:16:48.678 256+0 records out 00:16:48.678 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0911527 s, 11.5 MB/s 00:16:48.678 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:48.678 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:16:48.937 256+0 records in 00:16:48.937 256+0 records out 00:16:48.937 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0948507 s, 11.1 MB/s 00:16:48.937 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:48.937 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:16:48.937 256+0 records in 00:16:48.937 256+0 records out 00:16:48.937 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0905946 s, 11.6 MB/s 00:16:48.937 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:16:48.937 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:48.937 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:48.937 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:48.937 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:48.937 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:48.937 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:48.937 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:48.937 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:16:48.937 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:48.937 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:16:48.937 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:48.937 15:12:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:16:48.937 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:48.937 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:16:48.937 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:48.937 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:16:48.937 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:48.937 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:16:48.937 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:48.937 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:48.937 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:48.937 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:48.937 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:48.937 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:48.937 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:48.937 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:49.195 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:49.195 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:49.195 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:49.195 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:49.195 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:49.195 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:49.195 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:49.195 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:49.195 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:49.195 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:49.452 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:49.452 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:49.452 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:49.453 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:49.453 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:49.453 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:49.453 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:49.453 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:49.453 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:49.453 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:16:49.711 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:16:49.711 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:16:49.711 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:16:49.711 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:49.711 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:49.711 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:16:49.711 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:49.711 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:49.711 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:49.711 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:16:49.976 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:16:49.976 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:16:49.976 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:16:49.976 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:49.976 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:49.976 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:16:49.976 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:49.976 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:49.976 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:49.976 15:12:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:16:49.976 15:12:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:16:49.976 15:12:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:16:49.976 15:12:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:16:49.976 15:12:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:49.976 15:12:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:49.976 15:12:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:16:49.976 15:12:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:49.976 15:12:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:49.976 15:12:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:49.976 15:12:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:16:50.233 15:12:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:16:50.233 15:12:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:16:50.233 15:12:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:16:50.234 15:12:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:50.234 15:12:28 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:50.234 15:12:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:16:50.234 15:12:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:50.234 15:12:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:50.234 15:12:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:50.234 15:12:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:50.234 15:12:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:50.492 15:12:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:50.492 15:12:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:50.492 15:12:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:50.492 15:12:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:50.492 15:12:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:50.492 15:12:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:50.492 15:12:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:50.492 15:12:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:50.492 15:12:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:50.492 15:12:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:16:50.492 15:12:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:50.492 15:12:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:16:50.492 15:12:28 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:50.492 15:12:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:50.492 15:12:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:50.492 15:12:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:16:50.492 15:12:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:16:50.492 15:12:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:16:50.752 malloc_lvol_verify 00:16:50.752 15:12:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:16:51.011 3196f58e-0a6a-47fd-8334-5e8509f63e93 00:16:51.011 15:12:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:16:51.269 412de0c8-491c-4514-82f7-3c9e5b583059 00:16:51.269 15:12:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:16:51.269 /dev/nbd0 00:16:51.544 15:12:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:16:51.544 mke2fs 1.46.5 (30-Dec-2021) 00:16:51.544 Discarding device blocks: 0/4096 done 
00:16:51.544 Creating filesystem with 4096 1k blocks and 1024 inodes 00:16:51.544 00:16:51.544 Allocating group tables: 0/1 done 00:16:51.544 Writing inode tables: 0/1 done 00:16:51.544 Creating journal (1024 blocks): done 00:16:51.544 Writing superblocks and filesystem accounting information: 0/1 done 00:16:51.544 00:16:51.544 15:12:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:16:51.545 15:12:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:51.545 15:12:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:51.545 15:12:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:51.545 15:12:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:51.545 15:12:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:51.545 15:12:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:51.545 15:12:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:51.545 15:12:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:51.545 15:12:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:51.545 15:12:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:51.545 15:12:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:51.545 15:12:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:51.545 15:12:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:51.545 15:12:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:51.545 15:12:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:51.545 15:12:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:16:51.545 15:12:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:16:51.545 15:12:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 77258 00:16:51.545 15:12:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 77258 ']' 00:16:51.545 15:12:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 77258 00:16:51.545 15:12:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:16:51.545 15:12:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:51.545 15:12:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77258 00:16:51.545 15:12:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:51.545 15:12:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:51.545 killing process with pid 77258 00:16:51.545 15:12:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77258' 00:16:51.545 15:12:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@967 -- # kill 77258 00:16:51.545 15:12:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # wait 77258 00:16:53.447 15:12:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:16:53.447 00:16:53.447 real 0m10.509s 00:16:53.447 user 0m14.169s 00:16:53.447 sys 0m3.675s 00:16:53.447 15:12:31 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:16:53.447 15:12:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:53.447 ************************************ 00:16:53.447 END TEST bdev_nbd 00:16:53.447 ************************************ 00:16:53.447 15:12:31 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:16:53.447 15:12:31 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:16:53.447 15:12:31 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:16:53.447 15:12:31 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:16:53.447 15:12:31 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:16:53.447 15:12:31 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:53.447 15:12:31 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:53.447 15:12:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:53.447 ************************************ 00:16:53.447 START TEST bdev_fio 00:16:53.447 ************************************ 00:16:53.447 15:12:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:16:53.447 15:12:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:16:53.447 15:12:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:16:53.447 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:16:53.447 15:12:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:16:53.447 15:12:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:16:53.447 15:12:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:16:53.447 15:12:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:16:53.447 15:12:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:16:53.447 15:12:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:53.447 15:12:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:16:53.447 15:12:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:16:53.447 15:12:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:16:53.447 15:12:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:16:53.447 15:12:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:53.447 15:12:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:16:53.447 15:12:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:16:53.447 15:12:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:53.447 15:12:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:16:53.447 15:12:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:16:53.447 15:12:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:16:53.447 15:12:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:16:53.447 15:12:31 
blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:16:53.447 15:12:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:16:53.447 15:12:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:16:53.447 15:12:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:53.447 15:12:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:16:53.447 15:12:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:16:53.447 15:12:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:53.447 15:12:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:16:53.447 15:12:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:16:53.447 15:12:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:53.447 15:12:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:16:53.447 15:12:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:16:53.447 15:12:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:53.447 15:12:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:16:53.447 15:12:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:16:53.448 15:12:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:53.448 15:12:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:16:53.448 15:12:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:16:53.448 15:12:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:53.448 15:12:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:16:53.448 15:12:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:16:53.448 15:12:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:16:53.448 15:12:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:53.448 15:12:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:16:53.448 15:12:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:53.448 15:12:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:53.448 ************************************ 00:16:53.448 START TEST bdev_fio_rw_verify 00:16:53.448 ************************************ 00:16:53.448 15:12:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:53.448 
15:12:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:53.448 15:12:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:53.448 15:12:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:53.448 15:12:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:53.448 15:12:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:53.448 15:12:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:16:53.448 15:12:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:53.448 15:12:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:53.448 15:12:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:53.448 15:12:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:53.448 15:12:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:16:53.448 15:12:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:53.448 15:12:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:53.448 15:12:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:16:53.448 15:12:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:53.448 15:12:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:53.448 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:53.448 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:53.448 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:53.448 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:53.448 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:53.448 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:53.448 fio-3.35 00:16:53.448 Starting 6 threads 00:17:05.730 00:17:05.730 
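[editor's note] For readability, a rough reconstruction of the job file that the echo loop above assembles in /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio before this run. Only lines actually echoed in the trace are reproduced; the [global] verify template cat'ed at autotest_common.sh@1301/@1314 is elided because its contents do not appear in this log:

    serialize_overlap=1
    [job_nvme0n1]
    filename=nvme0n1
    [job_nvme1n1]
    filename=nvme1n1
    [job_nvme2n1]
    filename=nvme2n1
    [job_nvme2n2]
    filename=nvme2n2
    [job_nvme2n3]
    filename=nvme2n3
    [job_nvme3n1]
    filename=nvme3n1

The ioengine, queue depth, block size and runtime are not part of the file; they are passed on the fio command line itself (--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10), as the fio_params line in the trace shows.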
job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=77663: Mon Jul 15 15:12:42 2024 00:17:05.730 read: IOPS=33.7k, BW=132MiB/s (138MB/s)(1318MiB/10001msec) 00:17:05.730 slat (usec): min=2, max=2624, avg= 8.37, stdev= 7.65 00:17:05.730 clat (usec): min=72, max=4794, avg=434.68, stdev=226.96 00:17:05.730 lat (usec): min=85, max=4808, avg=443.05, stdev=228.48 00:17:05.730 clat percentiles (usec): 00:17:05.730 | 50.000th=[ 396], 99.000th=[ 1090], 99.900th=[ 1795], 99.990th=[ 3851], 00:17:05.730 | 99.999th=[ 4752] 00:17:05.730 write: IOPS=34.1k, BW=133MiB/s (140MB/s)(1332MiB/10001msec); 0 zone resets 00:17:05.730 slat (usec): min=8, max=4560, avg=37.72, stdev=44.15 00:17:05.730 clat (usec): min=61, max=8159, avg=608.41, stdev=284.39 00:17:05.730 lat (usec): min=85, max=8249, avg=646.13, stdev=294.31 00:17:05.730 clat percentiles (usec): 00:17:05.730 | 50.000th=[ 578], 99.000th=[ 1418], 99.900th=[ 1975], 99.990th=[ 3884], 00:17:05.730 | 99.999th=[ 7963] 00:17:05.730 bw ( KiB/s): min=110198, max=160875, per=99.87%, avg=136227.63, stdev=2409.45, samples=114 00:17:05.730 iops : min=27548, max=40218, avg=34055.79, stdev=602.34, samples=114 00:17:05.730 lat (usec) : 100=0.01%, 250=13.85%, 500=39.59%, 750=28.63%, 1000=12.84% 00:17:05.730 lat (msec) : 2=5.00%, 4=0.08%, 10=0.01% 00:17:05.730 cpu : usr=55.25%, sys=25.62%, ctx=8577, majf=0, minf=27966 00:17:05.730 IO depths : 1=11.7%, 2=24.0%, 4=51.0%, 8=13.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:05.730 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.730 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.730 issued rwts: total=337496,341041,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:05.730 latency : target=0, window=0, percentile=100.00%, depth=8 00:17:05.730 00:17:05.730 Run status group 0 (all jobs): 00:17:05.730 READ: bw=132MiB/s (138MB/s), 132MiB/s-132MiB/s (138MB/s-138MB/s), io=1318MiB (1382MB), run=10001-10001msec 00:17:05.730 WRITE: bw=133MiB/s (140MB/s), 133MiB/s-133MiB/s (140MB/s-140MB/s), io=1332MiB (1397MB), run=10001-10001msec 00:17:05.730 ----------------------------------------------------- 00:17:05.730 Suppressions used: 00:17:05.730 count bytes template 00:17:05.730 6 48 /usr/src/fio/parse.c 00:17:05.730 3296 316416 /usr/src/fio/iolog.c 00:17:05.730 1 8 libtcmalloc_minimal.so 00:17:05.730 1 904 libcrypto.so 00:17:05.730 ----------------------------------------------------- 00:17:05.730 00:17:05.730 00:17:05.730 real 0m12.577s 00:17:05.730 user 0m35.258s 00:17:05.730 sys 0m15.746s 00:17:05.730 15:12:43 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:05.730 ************************************ 00:17:05.730 END TEST bdev_fio_rw_verify 00:17:05.730 ************************************ 00:17:05.730 15:12:43 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:17:05.730 15:12:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:17:05.730 15:12:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:17:05.730 15:12:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:05.730 15:12:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:17:05.730 15:12:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:05.730 15:12:43 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1281 -- # local workload=trim 00:17:05.730 15:12:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:17:05.988 15:12:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:17:05.988 15:12:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:17:05.988 15:12:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:05.988 15:12:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:17:05.988 15:12:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:17:05.988 15:12:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:05.988 15:12:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:17:05.989 15:12:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:17:05.989 15:12:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:17:05.989 15:12:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:17:05.989 15:12:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:17:05.989 15:12:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "283c5867-893a-4384-ab97-ed4dbb625e1a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "283c5867-893a-4384-ab97-ed4dbb625e1a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "6905dd5e-55ad-457a-bbdd-00284134ab40"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "6905dd5e-55ad-457a-bbdd-00284134ab40",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "e40de2ad-7ceb-4837-a4a6-5ea28b789e6d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e40de2ad-7ceb-4837-a4a6-5ea28b789e6d",' ' "assigned_rate_limits": {' ' 
"rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "9e044a54-fa0a-4a31-a8de-2995987bd493"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9e044a54-fa0a-4a31-a8de-2995987bd493",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "5bb585b7-fe8f-48a8-b535-98c68ba08951"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5bb585b7-fe8f-48a8-b535-98c68ba08951",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "ce4b16ca-68de-46a8-bc23-e02e261ba7a1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "ce4b16ca-68de-46a8-bc23-e02e261ba7a1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:17:05.989 15:12:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:17:05.989 15:12:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:05.989 15:12:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:17:05.989 /home/vagrant/spdk_repo/spdk 00:17:05.989 15:12:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:17:05.989 15:12:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:17:05.989 00:17:05.989 real 0m12.781s 00:17:05.989 user 0m35.359s 00:17:05.989 sys 0m15.854s 00:17:05.989 15:12:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:05.989 15:12:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:05.989 ************************************ 00:17:05.989 END TEST bdev_fio 00:17:05.989 ************************************ 00:17:05.989 15:12:43 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:17:05.989 15:12:43 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:05.989 15:12:43 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:05.989 15:12:43 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:17:05.989 15:12:43 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:05.989 15:12:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:05.989 ************************************ 00:17:05.989 START TEST bdev_verify 00:17:05.989 ************************************ 00:17:05.989 15:12:43 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:05.989 [2024-07-15 15:12:44.053528] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:17:05.989 [2024-07-15 15:12:44.053732] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77842 ] 00:17:06.247 [2024-07-15 15:12:44.233144] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:06.506 [2024-07-15 15:12:44.470756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.506 [2024-07-15 15:12:44.471399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:07.072 Running I/O for 5 seconds... 
00:17:12.341 00:17:12.341 Latency(us) 00:17:12.341 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.341 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:12.341 Verification LBA range: start 0x0 length 0xa0000 00:17:12.341 nvme0n1 : 5.05 1850.46 7.23 0.00 0.00 69047.88 10531.55 64105.08 00:17:12.341 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:12.341 Verification LBA range: start 0xa0000 length 0xa0000 00:17:12.341 nvme0n1 : 5.07 1765.89 6.90 0.00 0.00 72236.69 9558.53 66394.55 00:17:12.341 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:12.341 Verification LBA range: start 0x0 length 0xbd0bd 00:17:12.341 nvme1n1 : 5.06 2732.70 10.67 0.00 0.00 46626.69 5695.05 51970.91 00:17:12.341 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:12.341 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:17:12.341 nvme1n1 : 5.07 2823.24 11.03 0.00 0.00 44970.15 4264.13 59068.26 00:17:12.341 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:12.341 Verification LBA range: start 0x0 length 0x80000 00:17:12.342 nvme2n1 : 5.04 1929.01 7.54 0.00 0.00 66026.02 8013.14 60899.83 00:17:12.342 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:12.342 Verification LBA range: start 0x80000 length 0x80000 00:17:12.342 nvme2n1 : 5.07 1994.95 7.79 0.00 0.00 63567.98 6410.51 54947.21 00:17:12.342 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:12.342 Verification LBA range: start 0x0 length 0x80000 00:17:12.342 nvme2n2 : 5.06 1921.15 7.50 0.00 0.00 66203.15 10588.79 61357.72 00:17:12.342 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:12.342 Verification LBA range: start 0x80000 length 0x80000 00:17:12.342 nvme2n2 : 5.06 1973.45 7.71 0.00 0.00 64131.02 9043.40 56320.89 00:17:12.342 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:12.342 Verification LBA range: start 0x0 length 0x80000 00:17:12.342 nvme2n3 : 5.07 1919.26 7.50 0.00 0.00 66108.30 13450.62 61357.72 00:17:12.342 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:12.342 Verification LBA range: start 0x80000 length 0x80000 00:17:12.342 nvme2n3 : 5.07 1968.66 7.69 0.00 0.00 64232.68 13965.75 60899.83 00:17:12.342 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:12.342 Verification LBA range: start 0x0 length 0x20000 00:17:12.342 nvme3n1 : 5.07 1918.25 7.49 0.00 0.00 66017.73 5494.72 65478.76 00:17:12.342 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:12.342 Verification LBA range: start 0x20000 length 0x20000 00:17:12.342 nvme3n1 : 5.06 1971.86 7.70 0.00 0.00 64793.67 9501.29 58610.36 00:17:12.342 =================================================================================================================== 00:17:12.342 Total : 24768.87 96.75 0.00 0.00 61584.39 4264.13 66394.55 00:17:13.721 00:17:13.721 real 0m7.475s 00:17:13.721 user 0m11.641s 00:17:13.721 sys 0m1.862s 00:17:13.721 15:12:51 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:13.721 15:12:51 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:17:13.721 ************************************ 00:17:13.721 END TEST bdev_verify 00:17:13.721 ************************************ 00:17:13.721 15:12:51 blockdev_xnvme -- 
common/autotest_common.sh@1142 -- # return 0 00:17:13.721 15:12:51 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:13.721 15:12:51 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:17:13.721 15:12:51 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:13.721 15:12:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:13.721 ************************************ 00:17:13.721 START TEST bdev_verify_big_io 00:17:13.721 ************************************ 00:17:13.721 15:12:51 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:13.721 [2024-07-15 15:12:51.582702] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:17:13.721 [2024-07-15 15:12:51.583317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77950 ] 00:17:13.721 [2024-07-15 15:12:51.746121] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:13.980 [2024-07-15 15:12:51.991487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.980 [2024-07-15 15:12:51.991528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:14.546 Running I/O for 5 seconds... 00:17:21.155 00:17:21.155 Latency(us) 00:17:21.155 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.155 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:21.155 Verification LBA range: start 0x0 length 0xa000 00:17:21.155 nvme0n1 : 5.83 140.04 8.75 0.00 0.00 881658.69 109436.53 923113.19 00:17:21.155 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:21.155 Verification LBA range: start 0xa000 length 0xa000 00:17:21.155 nvme0n1 : 5.75 155.72 9.73 0.00 0.00 796267.51 147441.69 934102.64 00:17:21.155 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:21.155 Verification LBA range: start 0x0 length 0xbd0b 00:17:21.155 nvme1n1 : 5.81 198.14 12.38 0.00 0.00 603382.51 11504.57 688671.75 00:17:21.155 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:21.155 Verification LBA range: start 0xbd0b length 0xbd0b 00:17:21.155 nvme1n1 : 5.77 188.58 11.79 0.00 0.00 643810.87 15682.85 725303.22 00:17:21.155 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:21.155 Verification LBA range: start 0x0 length 0x8000 00:17:21.155 nvme2n1 : 5.80 154.59 9.66 0.00 0.00 754348.28 141946.97 1003702.44 00:17:21.155 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:21.155 Verification LBA range: start 0x8000 length 0x8000 00:17:21.155 nvme2n1 : 5.76 141.74 8.86 0.00 0.00 831298.19 14137.46 1326059.43 00:17:21.156 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:21.156 Verification LBA range: start 0x0 length 0x8000 00:17:21.156 nvme2n2 : 5.82 162.05 10.13 0.00 0.00 714270.08 76010.31 1098944.28 00:17:21.156 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 
65536) 00:17:21.156 Verification LBA range: start 0x8000 length 0x8000 00:17:21.156 nvme2n2 : 5.76 127.79 7.99 0.00 0.00 899661.71 50368.28 2315109.28 00:17:21.156 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:21.156 Verification LBA range: start 0x0 length 0x8000 00:17:21.156 nvme2n3 : 5.83 143.19 8.95 0.00 0.00 791293.21 26672.29 2168583.38 00:17:21.156 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:21.156 Verification LBA range: start 0x8000 length 0x8000 00:17:21.156 nvme2n3 : 5.77 138.54 8.66 0.00 0.00 818403.84 51513.01 1963447.11 00:17:21.156 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:21.156 Verification LBA range: start 0x0 length 0x2000 00:17:21.156 nvme3n1 : 5.82 109.88 6.87 0.00 0.00 1004699.25 11619.05 2740034.40 00:17:21.156 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:21.156 Verification LBA range: start 0x2000 length 0x2000 00:17:21.156 nvme3n1 : 5.78 162.11 10.13 0.00 0.00 685052.31 12649.31 1443280.15 00:17:21.156 =================================================================================================================== 00:17:21.156 Total : 1822.36 113.90 0.00 0.00 769185.18 11504.57 2740034.40 00:17:22.092 00:17:22.092 real 0m8.637s 00:17:22.092 user 0m15.391s 00:17:22.092 sys 0m0.596s 00:17:22.092 15:13:00 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:22.092 15:13:00 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:17:22.092 ************************************ 00:17:22.092 END TEST bdev_verify_big_io 00:17:22.092 ************************************ 00:17:22.092 15:13:00 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:17:22.092 15:13:00 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:22.092 15:13:00 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:17:22.092 15:13:00 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:22.092 15:13:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:22.092 ************************************ 00:17:22.092 START TEST bdev_write_zeroes 00:17:22.092 ************************************ 00:17:22.092 15:13:00 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:22.351 [2024-07-15 15:13:00.284277] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:17:22.351 [2024-07-15 15:13:00.284436] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78066 ] 00:17:22.610 [2024-07-15 15:13:00.463245] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.610 [2024-07-15 15:13:00.702673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.179 Running I/O for 1 seconds... 
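[editor's note] The write_zeroes results that follow come from the same bdevperf tool with a different workload; per the command line in the trace it amounts to:

    ./build/examples/bdevperf \
        --json test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1

Only one core is available for this pass (a single reactor starts on core 0), so each nvme*n* bdev appears once in the table that follows rather than twice as in the verify runs.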
00:17:24.556 00:17:24.556 Latency(us) 00:17:24.556 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.556 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:24.556 nvme0n1 : 1.01 13727.05 53.62 0.00 0.00 9315.91 7555.24 23009.15 00:17:24.556 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:24.556 nvme1n1 : 1.01 16878.64 65.93 0.00 0.00 7571.12 3419.89 14080.22 00:17:24.556 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:24.556 nvme2n1 : 1.01 13663.91 53.37 0.00 0.00 9300.30 5065.45 20719.68 00:17:24.556 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:24.556 nvme2n2 : 1.02 13709.95 53.55 0.00 0.00 9264.85 4121.04 22207.83 00:17:24.556 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:24.556 nvme2n3 : 1.02 13696.17 53.50 0.00 0.00 9267.80 4264.13 23467.04 00:17:24.556 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:24.556 nvme3n1 : 1.02 13683.29 53.45 0.00 0.00 9269.01 4493.08 24497.30 00:17:24.556 =================================================================================================================== 00:17:24.556 Total : 85359.01 333.43 0.00 0.00 8946.08 3419.89 24497.30 00:17:25.494 00:17:25.494 real 0m3.388s 00:17:25.494 user 0m2.585s 00:17:25.494 sys 0m0.644s 00:17:25.494 15:13:03 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:25.494 15:13:03 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:17:25.494 ************************************ 00:17:25.494 END TEST bdev_write_zeroes 00:17:25.494 ************************************ 00:17:25.753 15:13:03 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:17:25.753 15:13:03 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:25.753 15:13:03 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:17:25.753 15:13:03 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:25.753 15:13:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:25.753 ************************************ 00:17:25.753 START TEST bdev_json_nonenclosed 00:17:25.753 ************************************ 00:17:25.753 15:13:03 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:25.753 [2024-07-15 15:13:03.728113] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
00:17:25.753 [2024-07-15 15:13:03.728238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78125 ] 00:17:26.011 [2024-07-15 15:13:03.892257] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.270 [2024-07-15 15:13:04.124129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.270 [2024-07-15 15:13:04.124228] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:17:26.270 [2024-07-15 15:13:04.124246] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:26.270 [2024-07-15 15:13:04.124259] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:26.529 00:17:26.529 real 0m0.931s 00:17:26.529 user 0m0.709s 00:17:26.529 sys 0m0.116s 00:17:26.529 15:13:04 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:17:26.529 15:13:04 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:26.529 15:13:04 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:17:26.529 ************************************ 00:17:26.529 END TEST bdev_json_nonenclosed 00:17:26.529 ************************************ 00:17:26.529 15:13:04 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 234 00:17:26.529 15:13:04 blockdev_xnvme -- bdev/blockdev.sh@781 -- # true 00:17:26.529 15:13:04 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:26.529 15:13:04 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:17:26.529 15:13:04 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:26.529 15:13:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:26.529 ************************************ 00:17:26.529 START TEST bdev_json_nonarray 00:17:26.529 ************************************ 00:17:26.529 15:13:04 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:26.788 [2024-07-15 15:13:04.713704] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:17:26.788 [2024-07-15 15:13:04.713833] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78156 ] 00:17:26.788 [2024-07-15 15:13:04.879963] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.047 [2024-07-15 15:13:05.118844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.047 [2024-07-15 15:13:05.118960] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
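[editor's note] Both negative tests here feed bdevperf a JSON file that breaks one of the loader's structural rules quoted in the errors above ("not enclosed in {}" and "'subsystems' should be an array"). A configuration it accepts is a single top-level object whose "subsystems" member is an array of per-subsystem objects, the same shape the ublk save_config output later in this log begins with, e.g.:

    {
      "subsystems": [
        {
          "subsystem": "keyring",
          "config": []
        }
      ]
    }

The contents of nonenclosed.json and nonarray.json themselves are not shown in this log.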
00:17:27.047 [2024-07-15 15:13:05.118977] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:27.047 [2024-07-15 15:13:05.118989] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:27.652 00:17:27.652 real 0m0.951s 00:17:27.652 user 0m0.713s 00:17:27.652 sys 0m0.131s 00:17:27.652 15:13:05 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:17:27.652 15:13:05 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:27.652 15:13:05 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:17:27.652 ************************************ 00:17:27.652 END TEST bdev_json_nonarray 00:17:27.652 ************************************ 00:17:27.652 15:13:05 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 234 00:17:27.652 15:13:05 blockdev_xnvme -- bdev/blockdev.sh@784 -- # true 00:17:27.652 15:13:05 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:17:27.652 15:13:05 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:17:27.652 15:13:05 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:17:27.652 15:13:05 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:17:27.652 15:13:05 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:17:27.652 15:13:05 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:17:27.652 15:13:05 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:27.652 15:13:05 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:17:27.652 15:13:05 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:17:27.652 15:13:05 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:17:27.652 15:13:05 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:17:27.652 15:13:05 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:28.221 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:50.142 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:17:50.142 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:50.142 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:55.420 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:17:55.420 00:17:55.420 real 1m28.618s 00:17:55.420 user 1m43.965s 00:17:55.420 sys 1m50.012s 00:17:55.420 15:13:33 blockdev_xnvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:55.420 15:13:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:55.420 ************************************ 00:17:55.420 END TEST blockdev_xnvme 00:17:55.420 ************************************ 00:17:55.420 15:13:33 -- common/autotest_common.sh@1142 -- # return 0 00:17:55.420 15:13:33 -- spdk/autotest.sh@251 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:17:55.420 15:13:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:55.420 15:13:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:55.420 15:13:33 -- common/autotest_common.sh@10 -- # set +x 00:17:55.420 ************************************ 00:17:55.420 START TEST ublk 00:17:55.420 ************************************ 00:17:55.420 15:13:33 ublk -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:17:55.420 * Looking for test storage... 
00:17:55.420 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:17:55.420 15:13:33 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:17:55.420 15:13:33 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:17:55.420 15:13:33 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:17:55.420 15:13:33 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:17:55.420 15:13:33 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:17:55.420 15:13:33 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:17:55.420 15:13:33 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:17:55.420 15:13:33 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:17:55.420 15:13:33 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:17:55.420 15:13:33 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:17:55.420 15:13:33 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:17:55.420 15:13:33 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:17:55.420 15:13:33 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:17:55.420 15:13:33 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:17:55.420 15:13:33 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:17:55.420 15:13:33 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:17:55.420 15:13:33 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:17:55.420 15:13:33 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:17:55.420 15:13:33 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:17:55.420 15:13:33 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:17:55.420 15:13:33 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:55.420 15:13:33 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:55.420 15:13:33 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:55.420 ************************************ 00:17:55.420 START TEST test_save_ublk_config 00:17:55.420 ************************************ 00:17:55.420 15:13:33 ublk.test_save_ublk_config -- common/autotest_common.sh@1123 -- # test_save_config 00:17:55.420 15:13:33 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:17:55.420 15:13:33 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=78476 00:17:55.420 15:13:33 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:17:55.420 15:13:33 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:17:55.420 15:13:33 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 78476 00:17:55.420 15:13:33 ublk.test_save_ublk_config -- common/autotest_common.sh@829 -- # '[' -z 78476 ']' 00:17:55.420 15:13:33 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.420 15:13:33 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:55.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.420 15:13:33 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.420 15:13:33 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:55.420 15:13:33 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:55.420 [2024-07-15 15:13:33.382106] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
00:17:55.420 [2024-07-15 15:13:33.382225] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78476 ] 00:17:55.677 [2024-07-15 15:13:33.543702] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.935 [2024-07-15 15:13:33.800950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.874 15:13:34 ublk.test_save_ublk_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:56.874 15:13:34 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # return 0 00:17:56.874 15:13:34 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:17:56.874 15:13:34 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:17:56.874 15:13:34 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.874 15:13:34 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:56.874 [2024-07-15 15:13:34.762029] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:56.874 [2024-07-15 15:13:34.763389] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:56.874 malloc0 00:17:56.874 [2024-07-15 15:13:34.853214] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:17:56.874 [2024-07-15 15:13:34.853312] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:17:56.874 [2024-07-15 15:13:34.853324] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:56.874 [2024-07-15 15:13:34.853336] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:56.874 [2024-07-15 15:13:34.861045] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:56.874 [2024-07-15 15:13:34.861080] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:56.874 [2024-07-15 15:13:34.869032] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:56.874 [2024-07-15 15:13:34.869172] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:56.874 [2024-07-15 15:13:34.893023] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:56.874 0 00:17:56.874 15:13:34 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.874 15:13:34 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:17:56.874 15:13:34 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.874 15:13:34 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:57.134 15:13:35 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.134 15:13:35 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:17:57.134 "subsystems": [ 00:17:57.134 { 00:17:57.134 "subsystem": "keyring", 00:17:57.134 "config": [] 00:17:57.134 }, 00:17:57.134 { 00:17:57.134 "subsystem": "iobuf", 00:17:57.134 "config": [ 00:17:57.135 { 00:17:57.135 "method": "iobuf_set_options", 00:17:57.135 "params": { 00:17:57.135 "small_pool_count": 8192, 00:17:57.135 "large_pool_count": 1024, 00:17:57.135 "small_bufsize": 8192, 00:17:57.135 "large_bufsize": 135168 00:17:57.135 } 00:17:57.135 } 00:17:57.135 ] 00:17:57.135 }, 00:17:57.135 { 
00:17:57.135 "subsystem": "sock", 00:17:57.135 "config": [ 00:17:57.135 { 00:17:57.135 "method": "sock_set_default_impl", 00:17:57.135 "params": { 00:17:57.135 "impl_name": "posix" 00:17:57.135 } 00:17:57.135 }, 00:17:57.135 { 00:17:57.135 "method": "sock_impl_set_options", 00:17:57.135 "params": { 00:17:57.135 "impl_name": "ssl", 00:17:57.135 "recv_buf_size": 4096, 00:17:57.135 "send_buf_size": 4096, 00:17:57.135 "enable_recv_pipe": true, 00:17:57.135 "enable_quickack": false, 00:17:57.135 "enable_placement_id": 0, 00:17:57.135 "enable_zerocopy_send_server": true, 00:17:57.135 "enable_zerocopy_send_client": false, 00:17:57.135 "zerocopy_threshold": 0, 00:17:57.135 "tls_version": 0, 00:17:57.135 "enable_ktls": false 00:17:57.135 } 00:17:57.135 }, 00:17:57.135 { 00:17:57.135 "method": "sock_impl_set_options", 00:17:57.135 "params": { 00:17:57.135 "impl_name": "posix", 00:17:57.135 "recv_buf_size": 2097152, 00:17:57.135 "send_buf_size": 2097152, 00:17:57.135 "enable_recv_pipe": true, 00:17:57.135 "enable_quickack": false, 00:17:57.135 "enable_placement_id": 0, 00:17:57.135 "enable_zerocopy_send_server": true, 00:17:57.135 "enable_zerocopy_send_client": false, 00:17:57.135 "zerocopy_threshold": 0, 00:17:57.135 "tls_version": 0, 00:17:57.135 "enable_ktls": false 00:17:57.135 } 00:17:57.135 } 00:17:57.135 ] 00:17:57.135 }, 00:17:57.135 { 00:17:57.135 "subsystem": "vmd", 00:17:57.135 "config": [] 00:17:57.135 }, 00:17:57.135 { 00:17:57.135 "subsystem": "accel", 00:17:57.135 "config": [ 00:17:57.135 { 00:17:57.135 "method": "accel_set_options", 00:17:57.135 "params": { 00:17:57.135 "small_cache_size": 128, 00:17:57.135 "large_cache_size": 16, 00:17:57.135 "task_count": 2048, 00:17:57.135 "sequence_count": 2048, 00:17:57.135 "buf_count": 2048 00:17:57.135 } 00:17:57.135 } 00:17:57.135 ] 00:17:57.135 }, 00:17:57.135 { 00:17:57.135 "subsystem": "bdev", 00:17:57.135 "config": [ 00:17:57.135 { 00:17:57.135 "method": "bdev_set_options", 00:17:57.135 "params": { 00:17:57.135 "bdev_io_pool_size": 65535, 00:17:57.135 "bdev_io_cache_size": 256, 00:17:57.135 "bdev_auto_examine": true, 00:17:57.135 "iobuf_small_cache_size": 128, 00:17:57.135 "iobuf_large_cache_size": 16 00:17:57.135 } 00:17:57.135 }, 00:17:57.135 { 00:17:57.135 "method": "bdev_raid_set_options", 00:17:57.135 "params": { 00:17:57.135 "process_window_size_kb": 1024 00:17:57.135 } 00:17:57.135 }, 00:17:57.135 { 00:17:57.135 "method": "bdev_iscsi_set_options", 00:17:57.135 "params": { 00:17:57.135 "timeout_sec": 30 00:17:57.135 } 00:17:57.135 }, 00:17:57.135 { 00:17:57.135 "method": "bdev_nvme_set_options", 00:17:57.135 "params": { 00:17:57.135 "action_on_timeout": "none", 00:17:57.135 "timeout_us": 0, 00:17:57.135 "timeout_admin_us": 0, 00:17:57.135 "keep_alive_timeout_ms": 10000, 00:17:57.135 "arbitration_burst": 0, 00:17:57.135 "low_priority_weight": 0, 00:17:57.135 "medium_priority_weight": 0, 00:17:57.135 "high_priority_weight": 0, 00:17:57.135 "nvme_adminq_poll_period_us": 10000, 00:17:57.135 "nvme_ioq_poll_period_us": 0, 00:17:57.135 "io_queue_requests": 0, 00:17:57.135 "delay_cmd_submit": true, 00:17:57.135 "transport_retry_count": 4, 00:17:57.135 "bdev_retry_count": 3, 00:17:57.135 "transport_ack_timeout": 0, 00:17:57.135 "ctrlr_loss_timeout_sec": 0, 00:17:57.135 "reconnect_delay_sec": 0, 00:17:57.135 "fast_io_fail_timeout_sec": 0, 00:17:57.135 "disable_auto_failback": false, 00:17:57.135 "generate_uuids": false, 00:17:57.135 "transport_tos": 0, 00:17:57.135 "nvme_error_stat": false, 00:17:57.135 "rdma_srq_size": 0, 00:17:57.135 
"io_path_stat": false, 00:17:57.135 "allow_accel_sequence": false, 00:17:57.135 "rdma_max_cq_size": 0, 00:17:57.135 "rdma_cm_event_timeout_ms": 0, 00:17:57.135 "dhchap_digests": [ 00:17:57.135 "sha256", 00:17:57.135 "sha384", 00:17:57.135 "sha512" 00:17:57.135 ], 00:17:57.135 "dhchap_dhgroups": [ 00:17:57.135 "null", 00:17:57.135 "ffdhe2048", 00:17:57.135 "ffdhe3072", 00:17:57.135 "ffdhe4096", 00:17:57.135 "ffdhe6144", 00:17:57.135 "ffdhe8192" 00:17:57.135 ] 00:17:57.135 } 00:17:57.135 }, 00:17:57.135 { 00:17:57.135 "method": "bdev_nvme_set_hotplug", 00:17:57.135 "params": { 00:17:57.135 "period_us": 100000, 00:17:57.135 "enable": false 00:17:57.135 } 00:17:57.135 }, 00:17:57.135 { 00:17:57.135 "method": "bdev_malloc_create", 00:17:57.135 "params": { 00:17:57.135 "name": "malloc0", 00:17:57.135 "num_blocks": 8192, 00:17:57.135 "block_size": 4096, 00:17:57.135 "physical_block_size": 4096, 00:17:57.135 "uuid": "f415ef82-3898-403c-822e-458312925556", 00:17:57.135 "optimal_io_boundary": 0 00:17:57.135 } 00:17:57.135 }, 00:17:57.135 { 00:17:57.135 "method": "bdev_wait_for_examine" 00:17:57.135 } 00:17:57.135 ] 00:17:57.135 }, 00:17:57.135 { 00:17:57.135 "subsystem": "scsi", 00:17:57.135 "config": null 00:17:57.135 }, 00:17:57.135 { 00:17:57.135 "subsystem": "scheduler", 00:17:57.135 "config": [ 00:17:57.135 { 00:17:57.135 "method": "framework_set_scheduler", 00:17:57.135 "params": { 00:17:57.135 "name": "static" 00:17:57.135 } 00:17:57.135 } 00:17:57.135 ] 00:17:57.135 }, 00:17:57.135 { 00:17:57.135 "subsystem": "vhost_scsi", 00:17:57.135 "config": [] 00:17:57.135 }, 00:17:57.135 { 00:17:57.135 "subsystem": "vhost_blk", 00:17:57.135 "config": [] 00:17:57.135 }, 00:17:57.135 { 00:17:57.135 "subsystem": "ublk", 00:17:57.135 "config": [ 00:17:57.135 { 00:17:57.135 "method": "ublk_create_target", 00:17:57.135 "params": { 00:17:57.135 "cpumask": "1" 00:17:57.135 } 00:17:57.135 }, 00:17:57.135 { 00:17:57.135 "method": "ublk_start_disk", 00:17:57.135 "params": { 00:17:57.135 "bdev_name": "malloc0", 00:17:57.135 "ublk_id": 0, 00:17:57.135 "num_queues": 1, 00:17:57.135 "queue_depth": 128 00:17:57.135 } 00:17:57.135 } 00:17:57.135 ] 00:17:57.135 }, 00:17:57.135 { 00:17:57.135 "subsystem": "nbd", 00:17:57.135 "config": [] 00:17:57.135 }, 00:17:57.135 { 00:17:57.135 "subsystem": "nvmf", 00:17:57.135 "config": [ 00:17:57.135 { 00:17:57.135 "method": "nvmf_set_config", 00:17:57.135 "params": { 00:17:57.135 "discovery_filter": "match_any", 00:17:57.135 "admin_cmd_passthru": { 00:17:57.135 "identify_ctrlr": false 00:17:57.135 } 00:17:57.135 } 00:17:57.135 }, 00:17:57.135 { 00:17:57.135 "method": "nvmf_set_max_subsystems", 00:17:57.135 "params": { 00:17:57.135 "max_subsystems": 1024 00:17:57.135 } 00:17:57.135 }, 00:17:57.135 { 00:17:57.135 "method": "nvmf_set_crdt", 00:17:57.135 "params": { 00:17:57.135 "crdt1": 0, 00:17:57.135 "crdt2": 0, 00:17:57.135 "crdt3": 0 00:17:57.135 } 00:17:57.135 } 00:17:57.135 ] 00:17:57.135 }, 00:17:57.135 { 00:17:57.135 "subsystem": "iscsi", 00:17:57.135 "config": [ 00:17:57.135 { 00:17:57.135 "method": "iscsi_set_options", 00:17:57.135 "params": { 00:17:57.135 "node_base": "iqn.2016-06.io.spdk", 00:17:57.135 "max_sessions": 128, 00:17:57.135 "max_connections_per_session": 2, 00:17:57.135 "max_queue_depth": 64, 00:17:57.135 "default_time2wait": 2, 00:17:57.135 "default_time2retain": 20, 00:17:57.135 "first_burst_length": 8192, 00:17:57.135 "immediate_data": true, 00:17:57.135 "allow_duplicated_isid": false, 00:17:57.135 "error_recovery_level": 0, 00:17:57.135 "nop_timeout": 60, 
00:17:57.135 "nop_in_interval": 30, 00:17:57.135 "disable_chap": false, 00:17:57.135 "require_chap": false, 00:17:57.135 "mutual_chap": false, 00:17:57.135 "chap_group": 0, 00:17:57.135 "max_large_datain_per_connection": 64, 00:17:57.135 "max_r2t_per_connection": 4, 00:17:57.135 "pdu_pool_size": 36864, 00:17:57.135 "immediate_data_pool_size": 16384, 00:17:57.135 "data_out_pool_size": 2048 00:17:57.135 } 00:17:57.135 } 00:17:57.135 ] 00:17:57.135 } 00:17:57.135 ] 00:17:57.135 }' 00:17:57.135 15:13:35 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 78476 00:17:57.135 15:13:35 ublk.test_save_ublk_config -- common/autotest_common.sh@948 -- # '[' -z 78476 ']' 00:17:57.135 15:13:35 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # kill -0 78476 00:17:57.135 15:13:35 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # uname 00:17:57.135 15:13:35 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:57.135 15:13:35 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78476 00:17:57.135 15:13:35 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:57.135 15:13:35 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:57.136 killing process with pid 78476 00:17:57.136 15:13:35 ublk.test_save_ublk_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78476' 00:17:57.136 15:13:35 ublk.test_save_ublk_config -- common/autotest_common.sh@967 -- # kill 78476 00:17:57.136 15:13:35 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # wait 78476 00:17:59.054 [2024-07-15 15:13:36.711756] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:59.054 [2024-07-15 15:13:36.751033] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:59.054 [2024-07-15 15:13:36.751257] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:59.054 [2024-07-15 15:13:36.761055] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:59.054 [2024-07-15 15:13:36.761145] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:59.054 [2024-07-15 15:13:36.761156] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:59.054 [2024-07-15 15:13:36.761185] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:17:59.054 [2024-07-15 15:13:36.761370] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:18:00.429 15:13:38 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=78542 00:18:00.429 15:13:38 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 78542 00:18:00.429 15:13:38 ublk.test_save_ublk_config -- common/autotest_common.sh@829 -- # '[' -z 78542 ']' 00:18:00.429 15:13:38 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.429 15:13:38 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:18:00.429 15:13:38 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:00.429 15:13:38 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:00.429 15:13:38 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:00.429 15:13:38 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:18:00.429 "subsystems": [ 00:18:00.429 { 00:18:00.430 "subsystem": "keyring", 00:18:00.430 "config": [] 00:18:00.430 }, 00:18:00.430 { 00:18:00.430 "subsystem": "iobuf", 00:18:00.430 "config": [ 00:18:00.430 { 00:18:00.430 "method": "iobuf_set_options", 00:18:00.430 "params": { 00:18:00.430 "small_pool_count": 8192, 00:18:00.430 "large_pool_count": 1024, 00:18:00.430 "small_bufsize": 8192, 00:18:00.430 "large_bufsize": 135168 00:18:00.430 } 00:18:00.430 } 00:18:00.430 ] 00:18:00.430 }, 00:18:00.430 { 00:18:00.430 "subsystem": "sock", 00:18:00.430 "config": [ 00:18:00.430 { 00:18:00.430 "method": "sock_set_default_impl", 00:18:00.430 "params": { 00:18:00.430 "impl_name": "posix" 00:18:00.430 } 00:18:00.430 }, 00:18:00.430 { 00:18:00.430 "method": "sock_impl_set_options", 00:18:00.430 "params": { 00:18:00.430 "impl_name": "ssl", 00:18:00.430 "recv_buf_size": 4096, 00:18:00.430 "send_buf_size": 4096, 00:18:00.430 "enable_recv_pipe": true, 00:18:00.430 "enable_quickack": false, 00:18:00.430 "enable_placement_id": 0, 00:18:00.430 "enable_zerocopy_send_server": true, 00:18:00.430 "enable_zerocopy_send_client": false, 00:18:00.430 "zerocopy_threshold": 0, 00:18:00.430 "tls_version": 0, 00:18:00.430 "enable_ktls": false 00:18:00.430 } 00:18:00.430 }, 00:18:00.430 { 00:18:00.430 "method": "sock_impl_set_options", 00:18:00.430 "params": { 00:18:00.430 "impl_name": "posix", 00:18:00.430 "recv_buf_size": 2097152, 00:18:00.430 "send_buf_size": 2097152, 00:18:00.430 "enable_recv_pipe": true, 00:18:00.430 "enable_quickack": false, 00:18:00.430 "enable_placement_id": 0, 00:18:00.430 "enable_zerocopy_send_server": true, 00:18:00.430 "enable_zerocopy_send_client": false, 00:18:00.430 "zerocopy_threshold": 0, 00:18:00.430 "tls_version": 0, 00:18:00.430 "enable_ktls": false 00:18:00.430 } 00:18:00.430 } 00:18:00.430 ] 00:18:00.430 }, 00:18:00.430 { 00:18:00.430 "subsystem": "vmd", 00:18:00.430 "config": [] 00:18:00.430 }, 00:18:00.430 { 00:18:00.430 "subsystem": "accel", 00:18:00.430 "config": [ 00:18:00.430 { 00:18:00.430 "method": "accel_set_options", 00:18:00.430 "params": { 00:18:00.430 "small_cache_size": 128, 00:18:00.430 "large_cache_size": 16, 00:18:00.430 "task_count": 2048, 00:18:00.430 "sequence_count": 2048, 00:18:00.430 "buf_count": 2048 00:18:00.430 } 00:18:00.430 } 00:18:00.430 ] 00:18:00.430 }, 00:18:00.430 { 00:18:00.430 "subsystem": "bdev", 00:18:00.430 "config": [ 00:18:00.430 { 00:18:00.430 "method": "bdev_set_options", 00:18:00.430 "params": { 00:18:00.430 "bdev_io_pool_size": 65535, 00:18:00.430 "bdev_io_cache_size": 256, 00:18:00.430 "bdev_auto_examine": true, 00:18:00.430 "iobuf_small_cache_size": 128, 00:18:00.430 "iobuf_large_cache_size": 16 00:18:00.430 } 00:18:00.430 }, 00:18:00.430 { 00:18:00.430 "method": "bdev_raid_set_options", 00:18:00.430 "params": { 00:18:00.430 "process_window_size_kb": 1024 00:18:00.430 } 00:18:00.430 }, 00:18:00.430 { 00:18:00.430 "method": "bdev_iscsi_set_options", 00:18:00.430 "params": { 00:18:00.430 "timeout_sec": 30 00:18:00.430 } 00:18:00.430 }, 00:18:00.430 { 00:18:00.430 "method": "bdev_nvme_set_options", 00:18:00.430 "params": { 00:18:00.430 "action_on_timeout": "none", 00:18:00.430 "timeout_us": 0, 00:18:00.430 "timeout_admin_us": 0, 00:18:00.430 "keep_alive_timeout_ms": 10000, 00:18:00.430 "arbitration_burst": 0, 00:18:00.430 "low_priority_weight": 0, 
00:18:00.430 "medium_priority_weight": 0, 00:18:00.430 "high_priority_weight": 0, 00:18:00.430 "nvme_adminq_poll_period_us": 10000, 00:18:00.430 "nvme_ioq_poll_period_us": 0, 00:18:00.430 "io_queue_requests": 0, 00:18:00.430 "delay_cmd_submit": true, 00:18:00.430 "transport_retry_count": 4, 00:18:00.430 "bdev_retry_count": 3, 00:18:00.430 "transport_ack_timeout": 0, 00:18:00.430 "ctrlr_loss_timeout_sec": 0, 00:18:00.430 "reconnect_delay_sec": 0, 00:18:00.430 "fast_io_fail_timeout_sec": 0, 00:18:00.430 "disable_auto_failback": false, 00:18:00.430 "generate_uuids": false, 00:18:00.430 "transport_tos": 0, 00:18:00.430 "nvme_error_stat": false, 00:18:00.430 "rdma_srq_size": 0, 00:18:00.430 "io_path_stat": false, 00:18:00.430 "allow_accel_sequence": false, 00:18:00.430 "rdma_max_cq_size": 0, 00:18:00.430 "rdma_cm_event_timeout_ms": 0, 00:18:00.430 "dhchap_digests": [ 00:18:00.430 "sha256", 00:18:00.430 "sha384", 00:18:00.430 "sha512" 00:18:00.430 ], 00:18:00.430 "dhchap_dhgroups": [ 00:18:00.430 "null", 00:18:00.430 "ffdhe2048", 00:18:00.430 "ffdhe3072", 00:18:00.430 "ffdhe4096", 00:18:00.430 "ffdhe6144", 00:18:00.430 "ffdhe8192" 00:18:00.430 ] 00:18:00.430 } 00:18:00.430 }, 00:18:00.430 { 00:18:00.430 "method": "bdev_nvme_set_hotplug", 00:18:00.430 "params": { 00:18:00.430 "period_us": 100000, 00:18:00.430 "enable": false 00:18:00.430 } 00:18:00.430 }, 00:18:00.430 { 00:18:00.430 "method": "bdev_malloc_create", 00:18:00.430 "params": { 00:18:00.430 "name": "malloc0", 00:18:00.430 "num_blocks": 8192, 00:18:00.430 "block_size": 4096, 00:18:00.430 "physical_block_size": 4096, 00:18:00.430 "uuid": "f415ef82-3898-403c-822e-458312925556", 00:18:00.430 "optimal_io_boundary": 0 00:18:00.430 } 00:18:00.430 }, 00:18:00.430 { 00:18:00.430 "method": "bdev_wait_for_examine" 00:18:00.430 } 00:18:00.430 ] 00:18:00.430 }, 00:18:00.430 { 00:18:00.430 "subsystem": "scsi", 00:18:00.430 "config": null 00:18:00.430 }, 00:18:00.430 { 00:18:00.430 "subsystem": "scheduler", 00:18:00.430 "config": [ 00:18:00.430 { 00:18:00.430 "method": "framework_set_scheduler", 00:18:00.430 "params": { 00:18:00.430 "name": "static" 00:18:00.430 } 00:18:00.430 } 00:18:00.430 ] 00:18:00.430 }, 00:18:00.430 { 00:18:00.430 "subsystem": "vhost_scsi", 00:18:00.430 "config": [] 00:18:00.430 }, 00:18:00.430 { 00:18:00.430 "subsystem": "vhost_blk", 00:18:00.430 "config": [] 00:18:00.430 }, 00:18:00.430 { 00:18:00.430 "subsystem": "ublk", 00:18:00.430 "config": [ 00:18:00.430 { 00:18:00.430 "method": "ublk_create_target", 00:18:00.430 "params": { 00:18:00.430 "cpumask": "1" 00:18:00.430 } 00:18:00.430 }, 00:18:00.430 { 00:18:00.430 "method": "ublk_start_disk", 00:18:00.430 "params": { 00:18:00.430 "bdev_name": "malloc0", 00:18:00.430 "ublk_id": 0, 00:18:00.430 "num_queues": 1, 00:18:00.430 "queue_depth": 128 00:18:00.430 } 00:18:00.430 } 00:18:00.430 ] 00:18:00.430 }, 00:18:00.430 { 00:18:00.430 "subsystem": "nbd", 00:18:00.430 "config": [] 00:18:00.430 }, 00:18:00.430 { 00:18:00.430 "subsystem": "nvmf", 00:18:00.430 "config": [ 00:18:00.430 { 00:18:00.430 "method": "nvmf_set_config", 00:18:00.430 "params": { 00:18:00.430 "discovery_filter": "match_any", 00:18:00.430 "admin_cmd_passthru": { 00:18:00.430 "identify_ctrlr": false 00:18:00.430 } 00:18:00.430 } 00:18:00.430 }, 00:18:00.430 { 00:18:00.430 "method": "nvmf_set_max_subsystems", 00:18:00.430 "params": { 00:18:00.430 "max_subsystems": 1024 00:18:00.430 } 00:18:00.430 }, 00:18:00.430 { 00:18:00.430 "method": "nvmf_set_crdt", 00:18:00.430 "params": { 00:18:00.430 "crdt1": 0, 00:18:00.430 
"crdt2": 0, 00:18:00.430 "crdt3": 0 00:18:00.430 } 00:18:00.430 } 00:18:00.430 ] 00:18:00.430 }, 00:18:00.430 { 00:18:00.430 "subsystem": "iscsi", 00:18:00.430 "config": [ 00:18:00.430 { 00:18:00.430 "method": "iscsi_set_options", 00:18:00.430 "params": { 00:18:00.430 "node_base": "iqn.2016-06.io.spdk", 00:18:00.430 "max_sessions": 128, 00:18:00.430 "max_connections_per_session": 2, 00:18:00.430 "max_queue_depth": 64, 00:18:00.430 "default_time2wait": 2, 00:18:00.430 "default_time2retain": 20, 00:18:00.430 "first_burst_length": 8192, 00:18:00.430 "immediate_data": true, 00:18:00.430 "allow_duplicated_isid": false, 00:18:00.430 "error_recovery_level": 0, 00:18:00.430 "nop_timeout": 60, 00:18:00.430 "nop_in_interval": 30, 00:18:00.430 "disable_chap": false, 00:18:00.430 "require_chap": false, 00:18:00.430 "mutual_chap": false, 00:18:00.430 "chap_group": 0, 00:18:00.430 "max_large_datain_per_connection": 64, 00:18:00.430 "max_r2t_per_connection": 4, 00:18:00.430 "pdu_pool_size": 36864, 00:18:00.430 "immediate_data_pool_size": 16384, 00:18:00.430 "data_out_pool_size": 2048 00:18:00.430 } 00:18:00.430 } 00:18:00.430 ] 00:18:00.430 } 00:18:00.430 ] 00:18:00.430 }' 00:18:00.430 15:13:38 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:00.430 [2024-07-15 15:13:38.468873] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:18:00.430 [2024-07-15 15:13:38.469035] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78542 ] 00:18:00.690 [2024-07-15 15:13:38.638317] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.949 [2024-07-15 15:13:38.918608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.329 [2024-07-15 15:13:40.049007] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:02.329 [2024-07-15 15:13:40.050347] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:02.329 [2024-07-15 15:13:40.057157] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:18:02.329 [2024-07-15 15:13:40.057253] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:18:02.329 [2024-07-15 15:13:40.057265] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:02.329 [2024-07-15 15:13:40.057274] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:02.329 [2024-07-15 15:13:40.065166] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:02.329 [2024-07-15 15:13:40.065191] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:02.329 [2024-07-15 15:13:40.073030] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:02.329 [2024-07-15 15:13:40.073145] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:02.329 [2024-07-15 15:13:40.090027] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:02.329 15:13:40 ublk.test_save_ublk_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:02.329 15:13:40 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # return 0 00:18:02.329 15:13:40 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 
00:18:02.329 15:13:40 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:18:02.329 15:13:40 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.329 15:13:40 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:02.329 15:13:40 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.329 15:13:40 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:18:02.329 15:13:40 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:18:02.329 15:13:40 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 78542 00:18:02.329 15:13:40 ublk.test_save_ublk_config -- common/autotest_common.sh@948 -- # '[' -z 78542 ']' 00:18:02.329 15:13:40 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # kill -0 78542 00:18:02.329 15:13:40 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # uname 00:18:02.329 15:13:40 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:02.329 15:13:40 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78542 00:18:02.329 15:13:40 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:02.329 15:13:40 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:02.329 killing process with pid 78542 00:18:02.329 15:13:40 ublk.test_save_ublk_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78542' 00:18:02.329 15:13:40 ublk.test_save_ublk_config -- common/autotest_common.sh@967 -- # kill 78542 00:18:02.329 15:13:40 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # wait 78542 00:18:04.237 [2024-07-15 15:13:41.840511] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:04.237 [2024-07-15 15:13:41.875068] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:04.237 [2024-07-15 15:13:41.875269] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:04.237 [2024-07-15 15:13:41.890055] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:04.237 [2024-07-15 15:13:41.890117] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:04.237 [2024-07-15 15:13:41.890125] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:04.237 [2024-07-15 15:13:41.890153] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:18:04.237 [2024-07-15 15:13:41.890359] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:18:05.652 15:13:43 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:18:05.652 00:18:05.652 real 0m10.191s 00:18:05.652 user 0m8.823s 00:18:05.652 sys 0m2.095s 00:18:05.652 15:13:43 ublk.test_save_ublk_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:05.652 15:13:43 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:05.652 ************************************ 00:18:05.652 END TEST test_save_ublk_config 00:18:05.652 ************************************ 00:18:05.652 15:13:43 ublk -- common/autotest_common.sh@1142 -- # return 0 00:18:05.652 15:13:43 ublk -- ublk/ublk.sh@139 -- # spdk_pid=78634 00:18:05.652 15:13:43 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:05.652 15:13:43 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT 
SIGTERM EXIT 00:18:05.652 15:13:43 ublk -- ublk/ublk.sh@141 -- # waitforlisten 78634 00:18:05.652 15:13:43 ublk -- common/autotest_common.sh@829 -- # '[' -z 78634 ']' 00:18:05.652 15:13:43 ublk -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.652 15:13:43 ublk -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:05.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.652 15:13:43 ublk -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.652 15:13:43 ublk -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:05.652 15:13:43 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:05.652 [2024-07-15 15:13:43.606115] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:18:05.652 [2024-07-15 15:13:43.606254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78634 ] 00:18:05.652 [2024-07-15 15:13:43.761231] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:05.912 [2024-07-15 15:13:44.006695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.912 [2024-07-15 15:13:44.006734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:07.328 15:13:44 ublk -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:07.328 15:13:44 ublk -- common/autotest_common.sh@862 -- # return 0 00:18:07.328 15:13:44 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:18:07.328 15:13:44 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:07.328 15:13:44 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:07.328 15:13:44 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:07.328 ************************************ 00:18:07.328 START TEST test_create_ublk 00:18:07.328 ************************************ 00:18:07.328 15:13:44 ublk.test_create_ublk -- common/autotest_common.sh@1123 -- # test_create_ublk 00:18:07.328 15:13:44 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:18:07.328 15:13:44 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.328 15:13:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:07.328 [2024-07-15 15:13:44.993014] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:07.328 [2024-07-15 15:13:44.996207] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:07.328 15:13:44 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.328 15:13:44 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:18:07.328 15:13:44 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:18:07.328 15:13:44 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.328 15:13:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:07.328 15:13:45 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.328 15:13:45 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:18:07.328 15:13:45 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:18:07.328 15:13:45 ublk.test_create_ublk -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.328 15:13:45 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:07.328 [2024-07-15 15:13:45.335166] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:18:07.328 [2024-07-15 15:13:45.335542] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:18:07.328 [2024-07-15 15:13:45.335560] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:07.328 [2024-07-15 15:13:45.335587] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:07.328 [2024-07-15 15:13:45.343036] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:07.328 [2024-07-15 15:13:45.343085] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:07.328 [2024-07-15 15:13:45.354016] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:07.328 [2024-07-15 15:13:45.366196] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:07.328 [2024-07-15 15:13:45.381030] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:07.328 15:13:45 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.328 15:13:45 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:18:07.328 15:13:45 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:18:07.328 15:13:45 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:18:07.328 15:13:45 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.328 15:13:45 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:07.603 15:13:45 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.603 15:13:45 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:18:07.603 { 00:18:07.603 "ublk_device": "/dev/ublkb0", 00:18:07.603 "id": 0, 00:18:07.603 "queue_depth": 512, 00:18:07.603 "num_queues": 4, 00:18:07.603 "bdev_name": "Malloc0" 00:18:07.603 } 00:18:07.603 ]' 00:18:07.603 15:13:45 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:18:07.603 15:13:45 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:18:07.603 15:13:45 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:18:07.603 15:13:45 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:18:07.603 15:13:45 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:18:07.603 15:13:45 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:18:07.603 15:13:45 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:18:07.603 15:13:45 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:18:07.603 15:13:45 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:18:07.603 15:13:45 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:18:07.603 15:13:45 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:18:07.603 15:13:45 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:18:07.603 15:13:45 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:18:07.603 15:13:45 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:18:07.603 15:13:45 ublk.test_create_ublk -- 
lvol/common.sh@43 -- # local rw=write 00:18:07.603 15:13:45 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:18:07.603 15:13:45 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:18:07.603 15:13:45 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:18:07.603 15:13:45 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:18:07.603 15:13:45 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:18:07.603 15:13:45 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:18:07.603 15:13:45 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:18:07.862 fio: verification read phase will never start because write phase uses all of runtime 00:18:07.862 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:18:07.862 fio-3.35 00:18:07.862 Starting 1 process 00:18:17.845 00:18:17.845 fio_test: (groupid=0, jobs=1): err= 0: pid=78688: Mon Jul 15 15:13:55 2024 00:18:17.845 write: IOPS=15.2k, BW=59.4MiB/s (62.3MB/s)(594MiB/10001msec); 0 zone resets 00:18:17.845 clat (usec): min=34, max=4227, avg=64.77, stdev=98.09 00:18:17.845 lat (usec): min=34, max=4240, avg=65.27, stdev=98.10 00:18:17.845 clat percentiles (usec): 00:18:17.846 | 1.00th=[ 42], 5.00th=[ 51], 10.00th=[ 53], 20.00th=[ 55], 00:18:17.846 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 62], 00:18:17.846 | 70.00th=[ 64], 80.00th=[ 67], 90.00th=[ 71], 95.00th=[ 76], 00:18:17.846 | 99.00th=[ 89], 99.50th=[ 97], 99.90th=[ 2008], 99.95th=[ 2835], 00:18:17.846 | 99.99th=[ 3523] 00:18:17.846 bw ( KiB/s): min=54704, max=69496, per=100.00%, avg=61192.42, stdev=3654.04, samples=19 00:18:17.846 iops : min=13676, max=17374, avg=15298.11, stdev=913.51, samples=19 00:18:17.846 lat (usec) : 50=4.17%, 100=95.39%, 250=0.26%, 500=0.01%, 750=0.01% 00:18:17.846 lat (usec) : 1000=0.01% 00:18:17.846 lat (msec) : 2=0.05%, 4=0.10%, 10=0.01% 00:18:17.846 cpu : usr=2.35%, sys=8.87%, ctx=152191, majf=0, minf=795 00:18:17.846 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:17.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.846 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.846 issued rwts: total=0,152189,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:17.846 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:17.846 00:18:17.846 Run status group 0 (all jobs): 00:18:17.846 WRITE: bw=59.4MiB/s (62.3MB/s), 59.4MiB/s-59.4MiB/s (62.3MB/s-62.3MB/s), io=594MiB (623MB), run=10001-10001msec 00:18:17.846 00:18:17.846 Disk stats (read/write): 00:18:17.846 ublkb0: ios=0/150645, merge=0/0, ticks=0/8829, in_queue=8830, util=99.12% 00:18:17.846 15:13:55 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:18:17.846 15:13:55 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.846 15:13:55 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:17.846 
[2024-07-15 15:13:55.882722] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:17.846 [2024-07-15 15:13:55.927463] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:17.846 [2024-07-15 15:13:55.928927] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:17.846 [2024-07-15 15:13:55.935173] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:17.846 [2024-07-15 15:13:55.935514] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:17.846 [2024-07-15 15:13:55.935531] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:17.846 15:13:55 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.846 15:13:55 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:18:17.846 15:13:55 ublk.test_create_ublk -- common/autotest_common.sh@648 -- # local es=0 00:18:17.846 15:13:55 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:18:17.846 15:13:55 ublk.test_create_ublk -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:17.846 15:13:55 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:17.846 15:13:55 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:17.846 15:13:55 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:17.846 15:13:55 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # rpc_cmd ublk_stop_disk 0 00:18:17.846 15:13:55 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.846 15:13:55 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:18.105 [2024-07-15 15:13:55.957178] ublk.c:1071:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:18:18.105 request: 00:18:18.105 { 00:18:18.105 "ublk_id": 0, 00:18:18.105 "method": "ublk_stop_disk", 00:18:18.105 "req_id": 1 00:18:18.105 } 00:18:18.105 Got JSON-RPC error response 00:18:18.105 response: 00:18:18.105 { 00:18:18.105 "code": -19, 00:18:18.105 "message": "No such device" 00:18:18.105 } 00:18:18.105 15:13:55 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:18.105 15:13:55 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # es=1 00:18:18.105 15:13:55 ublk.test_create_ublk -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:18.105 15:13:55 ublk.test_create_ublk -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:18.105 15:13:55 ublk.test_create_ublk -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:18.105 15:13:55 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:18:18.105 15:13:55 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.105 15:13:55 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:18.105 [2024-07-15 15:13:55.973126] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:18:18.105 [2024-07-15 15:13:55.981039] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:18:18.105 [2024-07-15 15:13:55.981081] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:18.105 15:13:55 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.105 15:13:55 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:18:18.105 15:13:55 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:18.105 15:13:55 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:18.363 15:13:56 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.363 15:13:56 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:18:18.363 15:13:56 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:18:18.363 15:13:56 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.363 15:13:56 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:18.363 15:13:56 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.363 15:13:56 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:18:18.363 15:13:56 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:18:18.363 15:13:56 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:18:18.363 15:13:56 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:18:18.363 15:13:56 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.363 15:13:56 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:18.363 15:13:56 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.363 15:13:56 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:18:18.363 15:13:56 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:18:18.622 15:13:56 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:18:18.622 00:18:18.622 real 0m11.527s 00:18:18.622 user 0m0.645s 00:18:18.622 sys 0m0.990s 00:18:18.622 15:13:56 ublk.test_create_ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:18.622 15:13:56 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:18.622 ************************************ 00:18:18.622 END TEST test_create_ublk 00:18:18.622 ************************************ 00:18:18.622 15:13:56 ublk -- common/autotest_common.sh@1142 -- # return 0 00:18:18.622 15:13:56 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:18:18.622 15:13:56 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:18.622 15:13:56 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:18.622 15:13:56 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:18.622 ************************************ 00:18:18.622 START TEST test_create_multi_ublk 00:18:18.622 ************************************ 00:18:18.622 15:13:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@1123 -- # test_create_multi_ublk 00:18:18.622 15:13:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:18:18.622 15:13:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.622 15:13:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:18.622 [2024-07-15 15:13:56.587031] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:18.622 [2024-07-15 15:13:56.590095] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:18.622 15:13:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.622 15:13:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:18:18.622 15:13:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:18:18.622 15:13:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 
00:18:18.622 15:13:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:18:18.622 15:13:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.622 15:13:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:18.880 15:13:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.880 15:13:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:18:18.880 15:13:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:18:18.880 15:13:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.880 15:13:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:18.880 [2024-07-15 15:13:56.947206] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:18:18.880 [2024-07-15 15:13:56.947634] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:18:18.880 [2024-07-15 15:13:56.947689] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:18.880 [2024-07-15 15:13:56.947697] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:18.880 [2024-07-15 15:13:56.958077] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:18.880 [2024-07-15 15:13:56.958105] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:18.880 [2024-07-15 15:13:56.965055] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:18.880 [2024-07-15 15:13:56.965747] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:18.880 [2024-07-15 15:13:56.980086] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:18.880 15:13:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.880 15:13:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:18:18.880 15:13:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:18.880 15:13:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:18:18.880 15:13:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.880 15:13:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:19.454 15:13:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.454 15:13:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:18:19.454 15:13:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:18:19.454 15:13:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.454 15:13:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:19.454 [2024-07-15 15:13:57.324183] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:18:19.454 [2024-07-15 15:13:57.324614] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:18:19.454 [2024-07-15 15:13:57.324633] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:19.454 [2024-07-15 15:13:57.324643] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 
00:18:19.454 [2024-07-15 15:13:57.332047] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:19.454 [2024-07-15 15:13:57.332078] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:19.454 [2024-07-15 15:13:57.340048] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:19.454 [2024-07-15 15:13:57.340734] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:18:19.454 [2024-07-15 15:13:57.348119] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:18:19.454 15:13:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.454 15:13:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:18:19.454 15:13:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:19.454 15:13:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:18:19.454 15:13:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.454 15:13:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:19.713 15:13:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.713 15:13:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:18:19.713 15:13:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:18:19.713 15:13:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.713 15:13:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:19.713 [2024-07-15 15:13:57.718174] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:18:19.713 [2024-07-15 15:13:57.718609] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:18:19.713 [2024-07-15 15:13:57.718631] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:18:19.713 [2024-07-15 15:13:57.718640] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:18:19.713 [2024-07-15 15:13:57.726378] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:19.713 [2024-07-15 15:13:57.726406] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:19.713 [2024-07-15 15:13:57.734083] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:19.713 [2024-07-15 15:13:57.734779] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:18:19.713 [2024-07-15 15:13:57.739155] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:18:19.713 15:13:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.713 15:13:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:18:19.713 15:13:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:19.713 15:13:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:18:19.713 15:13:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.713 15:13:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:19.972 15:13:58 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.972 15:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:18:20.230 15:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:18:20.230 15:13:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.230 15:13:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:20.230 [2024-07-15 15:13:58.093186] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:18:20.230 [2024-07-15 15:13:58.093556] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:18:20.230 [2024-07-15 15:13:58.093573] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:18:20.230 [2024-07-15 15:13:58.093582] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:18:20.230 [2024-07-15 15:13:58.101036] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:20.230 [2024-07-15 15:13:58.101066] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:20.230 [2024-07-15 15:13:58.109031] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:20.230 [2024-07-15 15:13:58.109650] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:18:20.231 [2024-07-15 15:13:58.118064] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:18:20.231 15:13:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.231 15:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:18:20.231 15:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:18:20.231 15:13:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.231 15:13:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:20.231 15:13:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.231 15:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:18:20.231 { 00:18:20.231 "ublk_device": "/dev/ublkb0", 00:18:20.231 "id": 0, 00:18:20.231 "queue_depth": 512, 00:18:20.231 "num_queues": 4, 00:18:20.231 "bdev_name": "Malloc0" 00:18:20.231 }, 00:18:20.231 { 00:18:20.231 "ublk_device": "/dev/ublkb1", 00:18:20.231 "id": 1, 00:18:20.231 "queue_depth": 512, 00:18:20.231 "num_queues": 4, 00:18:20.231 "bdev_name": "Malloc1" 00:18:20.231 }, 00:18:20.231 { 00:18:20.231 "ublk_device": "/dev/ublkb2", 00:18:20.231 "id": 2, 00:18:20.231 "queue_depth": 512, 00:18:20.231 "num_queues": 4, 00:18:20.231 "bdev_name": "Malloc2" 00:18:20.231 }, 00:18:20.231 { 00:18:20.231 "ublk_device": "/dev/ublkb3", 00:18:20.231 "id": 3, 00:18:20.231 "queue_depth": 512, 00:18:20.231 "num_queues": 4, 00:18:20.231 "bdev_name": "Malloc3" 00:18:20.231 } 00:18:20.231 ]' 00:18:20.231 15:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:18:20.231 15:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:20.231 15:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:18:20.231 15:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:18:20.231 15:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:18:20.231 15:13:58 
ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:18:20.231 15:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:18:20.231 15:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:20.231 15:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:18:20.489 15:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:20.489 15:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:18:20.489 15:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:18:20.489 15:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:20.489 15:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:18:20.489 15:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:18:20.489 15:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:18:20.489 15:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:18:20.489 15:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:18:20.489 15:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:20.489 15:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:18:20.489 15:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:20.489 15:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:18:20.747 15:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:18:20.747 15:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:20.747 15:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:18:20.747 15:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:18:20.747 15:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:18:20.747 15:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:18:20.747 15:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:18:20.747 15:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:20.747 15:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:18:20.747 15:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:20.747 15:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:18:21.005 15:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:18:21.005 15:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:21.005 15:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:18:21.005 15:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:18:21.005 15:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:18:21.005 15:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:18:21.005 15:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:18:21.005 15:13:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:21.005 15:13:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 
-- # jq -r '.[3].num_queues' 00:18:21.005 15:13:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:21.005 15:13:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:18:21.263 15:13:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:18:21.263 15:13:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:18:21.263 15:13:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:18:21.263 15:13:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:21.263 15:13:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:18:21.264 15:13:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.264 15:13:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:21.264 [2024-07-15 15:13:59.146185] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:21.264 [2024-07-15 15:13:59.182546] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:21.264 [2024-07-15 15:13:59.187412] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:21.264 [2024-07-15 15:13:59.195184] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:21.264 [2024-07-15 15:13:59.195551] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:21.264 [2024-07-15 15:13:59.195568] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:21.264 15:13:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.264 15:13:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:21.264 15:13:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:18:21.264 15:13:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.264 15:13:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:21.264 [2024-07-15 15:13:59.211164] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:18:21.264 [2024-07-15 15:13:59.250111] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:21.264 [2024-07-15 15:13:59.251493] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:18:21.264 [2024-07-15 15:13:59.266093] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:21.264 [2024-07-15 15:13:59.266483] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:18:21.264 [2024-07-15 15:13:59.266500] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:18:21.264 15:13:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.264 15:13:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:21.264 15:13:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:18:21.264 15:13:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.264 15:13:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:21.264 [2024-07-15 15:13:59.281220] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:18:21.264 [2024-07-15 15:13:59.312126] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 
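The create-and-verify loop recorded above reduces to a handful of RPCs per device. A minimal sketch of the same flow outside the test harness (assuming rpc.py talking to the default socket; sizes and queue settings mirror the log):
# one 128 MiB malloc bdev per device, exposed as /dev/ublkb$i with 4 queues of depth 512
for i in 0 1 2 3; do
  ./scripts/rpc.py bdev_malloc_create -b "Malloc$i" 128 4096
  ./scripts/rpc.py ublk_start_disk "Malloc$i" "$i" -q 4 -d 512
done
# verify the mapping the same way the test does, via ublk_get_disks + jq
./scripts/rpc.py ublk_get_disks | jq -r '.[] | "\(.ublk_device) -> \(.bdev_name)"'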
00:18:21.264 [2024-07-15 15:13:59.313373] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:18:21.264 [2024-07-15 15:13:59.320132] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:21.264 [2024-07-15 15:13:59.320457] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:18:21.264 [2024-07-15 15:13:59.320473] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:18:21.264 15:13:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.264 15:13:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:21.264 15:13:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:18:21.264 15:13:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.264 15:13:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:21.264 [2024-07-15 15:13:59.336188] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:18:21.264 [2024-07-15 15:13:59.368114] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:21.264 [2024-07-15 15:13:59.369317] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:18:21.523 [2024-07-15 15:13:59.376021] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:21.523 [2024-07-15 15:13:59.376333] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:18:21.523 [2024-07-15 15:13:59.376349] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:18:21.523 15:13:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.523 15:13:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:18:21.523 [2024-07-15 15:13:59.580168] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:18:21.523 [2024-07-15 15:13:59.587018] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:18:21.523 [2024-07-15 15:13:59.587069] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:21.523 15:13:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:18:21.523 15:13:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:21.523 15:13:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:18:21.523 15:13:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.523 15:13:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:22.090 15:14:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.090 15:14:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:22.090 15:14:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:18:22.090 15:14:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.090 15:14:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:22.349 15:14:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.349 15:14:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:22.349 15:14:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:18:22.349 15:14:00 
ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.349 15:14:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:22.938 15:14:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.938 15:14:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:22.938 15:14:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:18:22.938 15:14:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.938 15:14:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:23.197 15:14:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.197 15:14:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:18:23.197 15:14:01 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:18:23.197 15:14:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.197 15:14:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:23.197 15:14:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.197 15:14:01 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:18:23.197 15:14:01 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:18:23.197 15:14:01 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:18:23.197 15:14:01 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:18:23.198 15:14:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.198 15:14:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:23.457 15:14:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.457 15:14:01 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:18:23.457 15:14:01 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:18:23.457 15:14:01 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:18:23.457 00:18:23.457 real 0m4.794s 00:18:23.457 user 0m1.139s 00:18:23.457 sys 0m0.242s 00:18:23.457 15:14:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:23.457 15:14:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:23.457 ************************************ 00:18:23.457 END TEST test_create_multi_ublk 00:18:23.457 ************************************ 00:18:23.457 15:14:01 ublk -- common/autotest_common.sh@1142 -- # return 0 00:18:23.457 15:14:01 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:18:23.457 15:14:01 ublk -- ublk/ublk.sh@147 -- # cleanup 00:18:23.457 15:14:01 ublk -- ublk/ublk.sh@130 -- # killprocess 78634 00:18:23.457 15:14:01 ublk -- common/autotest_common.sh@948 -- # '[' -z 78634 ']' 00:18:23.457 15:14:01 ublk -- common/autotest_common.sh@952 -- # kill -0 78634 00:18:23.457 15:14:01 ublk -- common/autotest_common.sh@953 -- # uname 00:18:23.457 15:14:01 ublk -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:23.457 15:14:01 ublk -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78634 00:18:23.457 15:14:01 ublk -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:23.457 15:14:01 ublk -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
00:18:23.457 killing process with pid 78634 00:18:23.457 15:14:01 ublk -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78634' 00:18:23.457 15:14:01 ublk -- common/autotest_common.sh@967 -- # kill 78634 00:18:23.457 15:14:01 ublk -- common/autotest_common.sh@972 -- # wait 78634 00:18:24.834 [2024-07-15 15:14:02.762053] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:18:24.834 [2024-07-15 15:14:02.762150] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:18:26.214 00:18:26.214 real 0m31.095s 00:18:26.214 user 0m46.584s 00:18:26.214 sys 0m8.251s 00:18:26.214 15:14:04 ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:26.214 15:14:04 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:26.214 ************************************ 00:18:26.214 END TEST ublk 00:18:26.214 ************************************ 00:18:26.214 15:14:04 -- common/autotest_common.sh@1142 -- # return 0 00:18:26.214 15:14:04 -- spdk/autotest.sh@252 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:18:26.214 15:14:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:26.214 15:14:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:26.214 15:14:04 -- common/autotest_common.sh@10 -- # set +x 00:18:26.214 ************************************ 00:18:26.214 START TEST ublk_recovery 00:18:26.214 ************************************ 00:18:26.214 15:14:04 ublk_recovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:18:26.474 * Looking for test storage... 00:18:26.474 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:18:26.474 15:14:04 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:18:26.474 15:14:04 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:18:26.474 15:14:04 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:18:26.474 15:14:04 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:18:26.474 15:14:04 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:18:26.474 15:14:04 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:18:26.474 15:14:04 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:18:26.474 15:14:04 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:18:26.474 15:14:04 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:18:26.474 15:14:04 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:18:26.474 15:14:04 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=79037 00:18:26.474 15:14:04 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:26.474 15:14:04 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:26.474 15:14:04 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 79037 00:18:26.474 15:14:04 ublk_recovery -- common/autotest_common.sh@829 -- # '[' -z 79037 ']' 00:18:26.474 15:14:04 ublk_recovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.474 15:14:04 ublk_recovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:26.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:26.474 15:14:04 ublk_recovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:26.474 15:14:04 ublk_recovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:26.474 15:14:04 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:26.474 [2024-07-15 15:14:04.501783] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:18:26.474 [2024-07-15 15:14:04.501905] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79037 ] 00:18:26.733 [2024-07-15 15:14:04.656504] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:26.992 [2024-07-15 15:14:04.908313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.992 [2024-07-15 15:14:04.908347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.930 15:14:05 ublk_recovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:27.930 15:14:05 ublk_recovery -- common/autotest_common.sh@862 -- # return 0 00:18:27.930 15:14:05 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:18:27.930 15:14:05 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.930 15:14:05 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:27.930 [2024-07-15 15:14:05.900052] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:27.930 [2024-07-15 15:14:05.903247] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:27.930 15:14:05 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.930 15:14:05 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:18:27.930 15:14:05 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.930 15:14:05 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:28.190 malloc0 00:18:28.190 15:14:06 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.190 15:14:06 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:18:28.190 15:14:06 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.190 15:14:06 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:28.190 [2024-07-15 15:14:06.089202] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:18:28.190 [2024-07-15 15:14:06.089327] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:18:28.190 [2024-07-15 15:14:06.089341] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:28.190 [2024-07-15 15:14:06.089352] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:18:28.190 [2024-07-15 15:14:06.097186] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:28.190 [2024-07-15 15:14:06.097222] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:28.190 [2024-07-15 15:14:06.105039] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:28.190 [2024-07-15 15:14:06.105245] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:18:28.190 [2024-07-15 15:14:06.119153] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:18:28.190 1 00:18:28.190 15:14:06 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:18:28.190 15:14:06 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:18:29.130 15:14:07 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=79079 00:18:29.130 15:14:07 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:18:29.130 15:14:07 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:18:29.130 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:29.130 fio-3.35 00:18:29.130 Starting 1 process 00:18:34.423 15:14:12 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 79037 00:18:34.423 15:14:12 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:18:39.686 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 79037 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:18:39.686 15:14:17 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=79185 00:18:39.686 15:14:17 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:39.686 15:14:17 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:39.686 15:14:17 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 79185 00:18:39.686 15:14:17 ublk_recovery -- common/autotest_common.sh@829 -- # '[' -z 79185 ']' 00:18:39.686 15:14:17 ublk_recovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.686 15:14:17 ublk_recovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:39.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.686 15:14:17 ublk_recovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.686 15:14:17 ublk_recovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:39.686 15:14:17 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:39.686 [2024-07-15 15:14:17.236458] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
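The fio invocation above drives 60 seconds of direct random read/write against /dev/ublkb1 while the target is killed underneath it. An equivalent job file (hypothetical layout; every option is taken from the command line shown in the log):
cat > /tmp/ublk_recovery.fio <<'EOF'
[fio_test]
filename=/dev/ublkb1
ioengine=libaio
rw=randrw
direct=1
iodepth=128
numjobs=1
time_based=1
runtime=60
EOF
taskset -c 2-3 fio /tmp/ublk_recovery.fio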
00:18:39.687 [2024-07-15 15:14:17.236573] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79185 ] 00:18:39.687 [2024-07-15 15:14:17.404283] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:39.687 [2024-07-15 15:14:17.652099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.687 [2024-07-15 15:14:17.652138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:40.622 15:14:18 ublk_recovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:40.622 15:14:18 ublk_recovery -- common/autotest_common.sh@862 -- # return 0 00:18:40.622 15:14:18 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:18:40.622 15:14:18 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.622 15:14:18 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:40.622 [2024-07-15 15:14:18.648025] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:40.622 [2024-07-15 15:14:18.651226] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:40.622 15:14:18 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.622 15:14:18 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:18:40.622 15:14:18 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.622 15:14:18 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:40.908 malloc0 00:18:40.908 15:14:18 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.908 15:14:18 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:18:40.908 15:14:18 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.908 15:14:18 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:40.908 [2024-07-15 15:14:18.838216] ublk.c:2095:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:18:40.908 [2024-07-15 15:14:18.838266] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:40.908 [2024-07-15 15:14:18.838276] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:18:40.908 [2024-07-15 15:14:18.846104] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:18:40.908 [2024-07-15 15:14:18.846135] ublk.c:2024:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:18:40.908 [2024-07-15 15:14:18.846241] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:18:40.908 1 00:18:40.908 15:14:18 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.908 15:14:18 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 79079 00:18:40.908 [2024-07-15 15:14:18.854044] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:18:40.908 [2024-07-15 15:14:18.861615] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:18:40.908 [2024-07-15 15:14:18.869312] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:18:40.908 [2024-07-15 15:14:18.869355] ublk.c: 378:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:19:37.161 00:19:37.161 
fio_test: (groupid=0, jobs=1): err= 0: pid=79082: Mon Jul 15 15:15:07 2024 00:19:37.161 read: IOPS=19.7k, BW=77.1MiB/s (80.8MB/s)(4624MiB/60002msec) 00:19:37.161 slat (nsec): min=1142, max=860017, avg=7753.97, stdev=3570.91 00:19:37.161 clat (usec): min=1026, max=6754.0k, avg=3202.94, stdev=50397.80 00:19:37.161 lat (usec): min=1094, max=6754.0k, avg=3210.69, stdev=50397.81 00:19:37.161 clat percentiles (usec): 00:19:37.161 | 1.00th=[ 2147], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2573], 00:19:37.161 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2737], 00:19:37.161 | 70.00th=[ 2769], 80.00th=[ 2835], 90.00th=[ 3195], 95.00th=[ 3982], 00:19:37.161 | 99.00th=[ 5669], 99.50th=[ 6259], 99.90th=[ 7963], 99.95th=[ 8717], 00:19:37.161 | 99.99th=[13173] 00:19:37.161 bw ( KiB/s): min=46984, max=101408, per=100.00%, avg=88611.72, stdev=8704.24, samples=106 00:19:37.161 iops : min=11746, max=25352, avg=22152.91, stdev=2176.06, samples=106 00:19:37.161 write: IOPS=19.7k, BW=77.0MiB/s (80.8MB/s)(4622MiB/60002msec); 0 zone resets 00:19:37.161 slat (nsec): min=1331, max=934936, avg=7920.51, stdev=3585.83 00:19:37.161 clat (usec): min=1002, max=6754.2k, avg=3269.28, stdev=48863.45 00:19:37.161 lat (usec): min=1009, max=6754.2k, avg=3277.20, stdev=48863.46 00:19:37.161 clat percentiles (usec): 00:19:37.161 | 1.00th=[ 2180], 5.00th=[ 2409], 10.00th=[ 2507], 20.00th=[ 2638], 00:19:37.161 | 30.00th=[ 2737], 40.00th=[ 2769], 50.00th=[ 2802], 60.00th=[ 2868], 00:19:37.161 | 70.00th=[ 2900], 80.00th=[ 2966], 90.00th=[ 3228], 95.00th=[ 3949], 00:19:37.161 | 99.00th=[ 5669], 99.50th=[ 6325], 99.90th=[ 8094], 99.95th=[ 8848], 00:19:37.161 | 99.99th=[13435] 00:19:37.161 bw ( KiB/s): min=46664, max=102680, per=100.00%, avg=88564.44, stdev=8661.59, samples=106 00:19:37.161 iops : min=11666, max=25670, avg=22141.09, stdev=2165.40, samples=106 00:19:37.161 lat (msec) : 2=0.31%, 4=94.82%, 10=4.83%, 20=0.02%, >=2000=0.01% 00:19:37.161 cpu : usr=8.87%, sys=30.71%, ctx=95647, majf=0, minf=13 00:19:37.161 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:19:37.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:37.161 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:37.161 issued rwts: total=1183733,1183132,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:37.161 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:37.161 00:19:37.161 Run status group 0 (all jobs): 00:19:37.161 READ: bw=77.1MiB/s (80.8MB/s), 77.1MiB/s-77.1MiB/s (80.8MB/s-80.8MB/s), io=4624MiB (4849MB), run=60002-60002msec 00:19:37.161 WRITE: bw=77.0MiB/s (80.8MB/s), 77.0MiB/s-77.0MiB/s (80.8MB/s-80.8MB/s), io=4622MiB (4846MB), run=60002-60002msec 00:19:37.161 00:19:37.161 Disk stats (read/write): 00:19:37.161 ublkb1: ios=1181384/1180842, merge=0/0, ticks=3684565/3635271, in_queue=7319836, util=99.95% 00:19:37.161 15:15:07 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:19:37.161 15:15:07 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.161 15:15:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:37.161 [2024-07-15 15:15:07.389845] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:19:37.161 [2024-07-15 15:15:07.427197] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:37.161 [2024-07-15 15:15:07.427495] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:19:37.161 [2024-07-15 
15:15:07.435068] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:37.161 [2024-07-15 15:15:07.435236] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:19:37.161 [2024-07-15 15:15:07.435249] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:19:37.161 15:15:07 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.161 15:15:07 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:19:37.161 15:15:07 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.161 15:15:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:37.161 [2024-07-15 15:15:07.446186] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:19:37.161 [2024-07-15 15:15:07.458369] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:19:37.161 [2024-07-15 15:15:07.458455] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:19:37.161 15:15:07 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.161 15:15:07 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:19:37.161 15:15:07 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:19:37.161 15:15:07 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 79185 00:19:37.161 15:15:07 ublk_recovery -- common/autotest_common.sh@948 -- # '[' -z 79185 ']' 00:19:37.161 15:15:07 ublk_recovery -- common/autotest_common.sh@952 -- # kill -0 79185 00:19:37.161 15:15:07 ublk_recovery -- common/autotest_common.sh@953 -- # uname 00:19:37.161 15:15:07 ublk_recovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:37.161 15:15:07 ublk_recovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79185 00:19:37.161 15:15:07 ublk_recovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:37.161 15:15:07 ublk_recovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:37.161 killing process with pid 79185 00:19:37.161 15:15:07 ublk_recovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79185' 00:19:37.161 15:15:07 ublk_recovery -- common/autotest_common.sh@967 -- # kill 79185 00:19:37.162 15:15:07 ublk_recovery -- common/autotest_common.sh@972 -- # wait 79185 00:19:37.162 [2024-07-15 15:15:08.881185] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:19:37.162 [2024-07-15 15:15:08.881262] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:19:37.162 00:19:37.162 real 1m6.335s 00:19:37.162 user 1m50.288s 00:19:37.162 sys 0m34.904s 00:19:37.162 15:15:10 ublk_recovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:37.162 ************************************ 00:19:37.162 END TEST ublk_recovery 00:19:37.162 ************************************ 00:19:37.162 15:15:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:37.162 15:15:10 -- common/autotest_common.sh@1142 -- # return 0 00:19:37.162 15:15:10 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:19:37.162 15:15:10 -- spdk/autotest.sh@260 -- # timing_exit lib 00:19:37.162 15:15:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:37.162 15:15:10 -- common/autotest_common.sh@10 -- # set +x 00:19:37.162 15:15:10 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:19:37.162 15:15:10 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:19:37.162 15:15:10 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:19:37.162 15:15:10 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:19:37.162 15:15:10 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 
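Stripped of the harness, the recovery scenario that just finished is a short RPC sequence: start a target, expose a malloc bdev over ublk, kill the target mid-I/O, then let a fresh target re-adopt the kernel device. A rough sketch (paths relative to the SPDK repo, ublk_drv already loaded; old_pid stands in for the first target's PID):
./build/bin/spdk_tgt -m 0x3 -L ublk &            # first target instance
./scripts/rpc.py ublk_create_target
./scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
./scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128
# ... fio runs against /dev/ublkb1 ...
kill -9 "$old_pid"                               # simulate a crash while I/O is in flight
./build/bin/spdk_tgt -m 0x3 -L ublk &            # second instance
./scripts/rpc.py ublk_create_target
./scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
./scripts/rpc.py ublk_recover_disk malloc0 1     # drives GET_DEV_INFO, then START/END_USER_RECOVERY
The debug lines above confirm the hand-off: fio keeps running through the restart, finishes cleanly at roughly 77 MiB/s in each direction, and ublk reports the recovery of device 1 as done successfully.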
00:19:37.162 15:15:10 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:19:37.162 15:15:10 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:19:37.162 15:15:10 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:19:37.162 15:15:10 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:19:37.162 15:15:10 -- spdk/autotest.sh@339 -- # '[' 1 -eq 1 ']' 00:19:37.162 15:15:10 -- spdk/autotest.sh@340 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:37.162 15:15:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:37.162 15:15:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:37.162 15:15:10 -- common/autotest_common.sh@10 -- # set +x 00:19:37.162 ************************************ 00:19:37.162 START TEST ftl 00:19:37.162 ************************************ 00:19:37.162 15:15:10 ftl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:37.162 * Looking for test storage... 00:19:37.162 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:37.162 15:15:10 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:37.162 15:15:10 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:37.162 15:15:10 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:37.162 15:15:10 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:37.162 15:15:10 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:37.162 15:15:10 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:37.162 15:15:10 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:37.162 15:15:10 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:37.162 15:15:10 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:37.162 15:15:10 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:37.162 15:15:10 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:37.162 15:15:10 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:37.162 15:15:10 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:37.162 15:15:10 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:37.162 15:15:10 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:37.162 15:15:10 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:37.162 15:15:10 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:37.162 15:15:10 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:37.162 15:15:10 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:37.162 15:15:10 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:37.162 15:15:10 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:37.162 15:15:10 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:37.162 15:15:10 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:37.162 15:15:10 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:37.162 15:15:10 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:37.162 15:15:10 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:37.162 15:15:10 ftl -- 
ftl/common.sh@23 -- # spdk_ini_pid= 00:19:37.162 15:15:10 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:37.162 15:15:10 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:37.162 15:15:10 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:37.162 15:15:10 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:19:37.162 15:15:10 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:19:37.162 15:15:10 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:19:37.162 15:15:10 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:19:37.162 15:15:10 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:37.162 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:37.162 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:37.162 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:37.162 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:37.162 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:37.162 15:15:11 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=79977 00:19:37.162 15:15:11 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:19:37.162 15:15:11 ftl -- ftl/ftl.sh@38 -- # waitforlisten 79977 00:19:37.162 15:15:11 ftl -- common/autotest_common.sh@829 -- # '[' -z 79977 ']' 00:19:37.162 15:15:11 ftl -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.162 15:15:11 ftl -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:37.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:37.162 15:15:11 ftl -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.162 15:15:11 ftl -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:37.162 15:15:11 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:37.162 [2024-07-15 15:15:11.732886] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
00:19:37.162 [2024-07-15 15:15:11.733064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79977 ] 00:19:37.162 [2024-07-15 15:15:11.900047] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.162 [2024-07-15 15:15:12.163177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.162 15:15:12 ftl -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:37.162 15:15:12 ftl -- common/autotest_common.sh@862 -- # return 0 00:19:37.162 15:15:12 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:19:37.162 15:15:12 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:19:37.162 15:15:13 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:37.162 15:15:13 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:19:37.162 15:15:14 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:19:37.162 15:15:14 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:19:37.162 15:15:14 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:19:37.162 15:15:14 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:19:37.162 15:15:14 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:19:37.162 15:15:14 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:19:37.162 15:15:14 ftl -- ftl/ftl.sh@50 -- # break 00:19:37.162 15:15:14 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:19:37.162 15:15:14 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:19:37.162 15:15:14 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:19:37.162 15:15:14 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:19:37.162 15:15:14 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:19:37.162 15:15:14 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:19:37.162 15:15:14 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:19:37.162 15:15:14 ftl -- ftl/ftl.sh@63 -- # break 00:19:37.162 15:15:14 ftl -- ftl/ftl.sh@66 -- # killprocess 79977 00:19:37.162 15:15:14 ftl -- common/autotest_common.sh@948 -- # '[' -z 79977 ']' 00:19:37.162 15:15:14 ftl -- common/autotest_common.sh@952 -- # kill -0 79977 00:19:37.162 15:15:14 ftl -- common/autotest_common.sh@953 -- # uname 00:19:37.162 15:15:14 ftl -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:37.162 15:15:14 ftl -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79977 00:19:37.162 killing process with pid 79977 00:19:37.162 15:15:14 ftl -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:37.162 15:15:14 ftl -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:37.162 15:15:14 ftl -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79977' 00:19:37.162 15:15:14 ftl -- common/autotest_common.sh@967 -- # kill 79977 00:19:37.162 15:15:14 ftl -- common/autotest_common.sh@972 -- # wait 79977 00:19:39.697 15:15:17 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:19:39.697 15:15:17 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:19:39.697 15:15:17 ftl -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:39.697 15:15:17 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:39.697 15:15:17 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:39.697 ************************************ 00:19:39.697 START TEST ftl_fio_basic 00:19:39.697 ************************************ 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:19:39.697 * Looking for test storage... 00:19:39.697 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- 
ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=80122 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 80122 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@829 -- # '[' -z 80122 ']' 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:39.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:39.697 15:15:17 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:39.956 [2024-07-15 15:15:17.869939] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
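The two PCI addresses handed to fio.sh above were chosen earlier in ftl.sh by filtering bdev_get_bdevs output with jq: devices exposing 64-byte metadata become nv-cache candidates, and any other sufficiently large non-zoned device becomes a base-device candidate. A sketch mirroring the filters in the log:
# cache candidates: non-zoned bdevs with 64-byte metadata and at least 1310720 blocks
./scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'
# base candidates: large non-zoned bdevs that are not the chosen cache device (0000:00:10.0 here)
./scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'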
00:19:39.956 [2024-07-15 15:15:17.870093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80122 ] 00:19:39.956 [2024-07-15 15:15:18.028709] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:40.215 [2024-07-15 15:15:18.290018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:40.215 [2024-07-15 15:15:18.290093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.215 [2024-07-15 15:15:18.290123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:41.590 15:15:19 ftl.ftl_fio_basic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:41.590 15:15:19 ftl.ftl_fio_basic -- common/autotest_common.sh@862 -- # return 0 00:19:41.590 15:15:19 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:41.590 15:15:19 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:19:41.590 15:15:19 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:41.590 15:15:19 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:19:41.590 15:15:19 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:19:41.590 15:15:19 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:41.590 15:15:19 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:41.590 15:15:19 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:19:41.590 15:15:19 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:41.590 15:15:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:19:41.590 15:15:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:41.590 15:15:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:19:41.590 15:15:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:19:41.590 15:15:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:41.850 15:15:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:41.850 { 00:19:41.850 "name": "nvme0n1", 00:19:41.850 "aliases": [ 00:19:41.850 "bb25a1f7-0ad8-4032-a4bc-e94868d88ded" 00:19:41.850 ], 00:19:41.850 "product_name": "NVMe disk", 00:19:41.850 "block_size": 4096, 00:19:41.850 "num_blocks": 1310720, 00:19:41.850 "uuid": "bb25a1f7-0ad8-4032-a4bc-e94868d88ded", 00:19:41.850 "assigned_rate_limits": { 00:19:41.850 "rw_ios_per_sec": 0, 00:19:41.850 "rw_mbytes_per_sec": 0, 00:19:41.850 "r_mbytes_per_sec": 0, 00:19:41.850 "w_mbytes_per_sec": 0 00:19:41.850 }, 00:19:41.850 "claimed": false, 00:19:41.850 "zoned": false, 00:19:41.850 "supported_io_types": { 00:19:41.850 "read": true, 00:19:41.850 "write": true, 00:19:41.850 "unmap": true, 00:19:41.850 "flush": true, 00:19:41.850 "reset": true, 00:19:41.850 "nvme_admin": true, 00:19:41.850 "nvme_io": true, 00:19:41.850 "nvme_io_md": false, 00:19:41.850 "write_zeroes": true, 00:19:41.850 "zcopy": false, 00:19:41.850 "get_zone_info": false, 00:19:41.850 "zone_management": false, 00:19:41.850 "zone_append": false, 00:19:41.850 "compare": true, 00:19:41.850 "compare_and_write": false, 00:19:41.850 "abort": true, 00:19:41.850 "seek_hole": false, 00:19:41.850 
"seek_data": false, 00:19:41.850 "copy": true, 00:19:41.850 "nvme_iov_md": false 00:19:41.850 }, 00:19:41.850 "driver_specific": { 00:19:41.850 "nvme": [ 00:19:41.850 { 00:19:41.850 "pci_address": "0000:00:11.0", 00:19:41.850 "trid": { 00:19:41.850 "trtype": "PCIe", 00:19:41.850 "traddr": "0000:00:11.0" 00:19:41.850 }, 00:19:41.850 "ctrlr_data": { 00:19:41.850 "cntlid": 0, 00:19:41.850 "vendor_id": "0x1b36", 00:19:41.850 "model_number": "QEMU NVMe Ctrl", 00:19:41.850 "serial_number": "12341", 00:19:41.850 "firmware_revision": "8.0.0", 00:19:41.850 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:41.850 "oacs": { 00:19:41.850 "security": 0, 00:19:41.850 "format": 1, 00:19:41.850 "firmware": 0, 00:19:41.850 "ns_manage": 1 00:19:41.850 }, 00:19:41.850 "multi_ctrlr": false, 00:19:41.850 "ana_reporting": false 00:19:41.850 }, 00:19:41.850 "vs": { 00:19:41.850 "nvme_version": "1.4" 00:19:41.850 }, 00:19:41.850 "ns_data": { 00:19:41.850 "id": 1, 00:19:41.850 "can_share": false 00:19:41.850 } 00:19:41.850 } 00:19:41.850 ], 00:19:41.850 "mp_policy": "active_passive" 00:19:41.850 } 00:19:41.850 } 00:19:41.850 ]' 00:19:41.850 15:15:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:41.850 15:15:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:19:41.850 15:15:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:42.110 15:15:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:19:42.110 15:15:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:19:42.110 15:15:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:19:42.110 15:15:19 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:19:42.110 15:15:19 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:42.110 15:15:19 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:19:42.110 15:15:19 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:42.110 15:15:19 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:42.110 15:15:20 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:19:42.110 15:15:20 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:42.370 15:15:20 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=c81370f5-d6c3-46f8-8faf-eae9fd58e642 00:19:42.370 15:15:20 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c81370f5-d6c3-46f8-8faf-eae9fd58e642 00:19:42.630 15:15:20 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=1ab9dc4b-eafd-4134-8a96-74197eb134d1 00:19:42.630 15:15:20 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 1ab9dc4b-eafd-4134-8a96-74197eb134d1 00:19:42.630 15:15:20 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:19:42.630 15:15:20 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:42.630 15:15:20 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=1ab9dc4b-eafd-4134-8a96-74197eb134d1 00:19:42.630 15:15:20 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:19:42.630 15:15:20 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 1ab9dc4b-eafd-4134-8a96-74197eb134d1 00:19:42.630 15:15:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=1ab9dc4b-eafd-4134-8a96-74197eb134d1 00:19:42.630 15:15:20 
ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:42.630 15:15:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:19:42.630 15:15:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:19:42.630 15:15:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1ab9dc4b-eafd-4134-8a96-74197eb134d1 00:19:42.890 15:15:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:42.890 { 00:19:42.890 "name": "1ab9dc4b-eafd-4134-8a96-74197eb134d1", 00:19:42.890 "aliases": [ 00:19:42.890 "lvs/nvme0n1p0" 00:19:42.890 ], 00:19:42.890 "product_name": "Logical Volume", 00:19:42.890 "block_size": 4096, 00:19:42.890 "num_blocks": 26476544, 00:19:42.890 "uuid": "1ab9dc4b-eafd-4134-8a96-74197eb134d1", 00:19:42.890 "assigned_rate_limits": { 00:19:42.890 "rw_ios_per_sec": 0, 00:19:42.890 "rw_mbytes_per_sec": 0, 00:19:42.890 "r_mbytes_per_sec": 0, 00:19:42.890 "w_mbytes_per_sec": 0 00:19:42.890 }, 00:19:42.890 "claimed": false, 00:19:42.890 "zoned": false, 00:19:42.890 "supported_io_types": { 00:19:42.890 "read": true, 00:19:42.890 "write": true, 00:19:42.890 "unmap": true, 00:19:42.890 "flush": false, 00:19:42.890 "reset": true, 00:19:42.890 "nvme_admin": false, 00:19:42.890 "nvme_io": false, 00:19:42.891 "nvme_io_md": false, 00:19:42.891 "write_zeroes": true, 00:19:42.891 "zcopy": false, 00:19:42.891 "get_zone_info": false, 00:19:42.891 "zone_management": false, 00:19:42.891 "zone_append": false, 00:19:42.891 "compare": false, 00:19:42.891 "compare_and_write": false, 00:19:42.891 "abort": false, 00:19:42.891 "seek_hole": true, 00:19:42.891 "seek_data": true, 00:19:42.891 "copy": false, 00:19:42.891 "nvme_iov_md": false 00:19:42.891 }, 00:19:42.891 "driver_specific": { 00:19:42.891 "lvol": { 00:19:42.891 "lvol_store_uuid": "c81370f5-d6c3-46f8-8faf-eae9fd58e642", 00:19:42.891 "base_bdev": "nvme0n1", 00:19:42.891 "thin_provision": true, 00:19:42.891 "num_allocated_clusters": 0, 00:19:42.891 "snapshot": false, 00:19:42.891 "clone": false, 00:19:42.891 "esnap_clone": false 00:19:42.891 } 00:19:42.891 } 00:19:42.891 } 00:19:42.891 ]' 00:19:42.891 15:15:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:42.891 15:15:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:19:42.891 15:15:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:42.891 15:15:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:42.891 15:15:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:42.891 15:15:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:19:42.891 15:15:20 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:19:42.891 15:15:20 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:19:42.891 15:15:20 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:43.149 15:15:21 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:43.149 15:15:21 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:43.149 15:15:21 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 1ab9dc4b-eafd-4134-8a96-74197eb134d1 00:19:43.149 15:15:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=1ab9dc4b-eafd-4134-8a96-74197eb134d1 00:19:43.149 15:15:21 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1379 -- # local bdev_info 00:19:43.149 15:15:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:19:43.149 15:15:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:19:43.149 15:15:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1ab9dc4b-eafd-4134-8a96-74197eb134d1 00:19:43.407 15:15:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:43.407 { 00:19:43.407 "name": "1ab9dc4b-eafd-4134-8a96-74197eb134d1", 00:19:43.407 "aliases": [ 00:19:43.407 "lvs/nvme0n1p0" 00:19:43.407 ], 00:19:43.407 "product_name": "Logical Volume", 00:19:43.407 "block_size": 4096, 00:19:43.407 "num_blocks": 26476544, 00:19:43.407 "uuid": "1ab9dc4b-eafd-4134-8a96-74197eb134d1", 00:19:43.407 "assigned_rate_limits": { 00:19:43.407 "rw_ios_per_sec": 0, 00:19:43.407 "rw_mbytes_per_sec": 0, 00:19:43.407 "r_mbytes_per_sec": 0, 00:19:43.407 "w_mbytes_per_sec": 0 00:19:43.407 }, 00:19:43.407 "claimed": false, 00:19:43.408 "zoned": false, 00:19:43.408 "supported_io_types": { 00:19:43.408 "read": true, 00:19:43.408 "write": true, 00:19:43.408 "unmap": true, 00:19:43.408 "flush": false, 00:19:43.408 "reset": true, 00:19:43.408 "nvme_admin": false, 00:19:43.408 "nvme_io": false, 00:19:43.408 "nvme_io_md": false, 00:19:43.408 "write_zeroes": true, 00:19:43.408 "zcopy": false, 00:19:43.408 "get_zone_info": false, 00:19:43.408 "zone_management": false, 00:19:43.408 "zone_append": false, 00:19:43.408 "compare": false, 00:19:43.408 "compare_and_write": false, 00:19:43.408 "abort": false, 00:19:43.408 "seek_hole": true, 00:19:43.408 "seek_data": true, 00:19:43.408 "copy": false, 00:19:43.408 "nvme_iov_md": false 00:19:43.408 }, 00:19:43.408 "driver_specific": { 00:19:43.408 "lvol": { 00:19:43.408 "lvol_store_uuid": "c81370f5-d6c3-46f8-8faf-eae9fd58e642", 00:19:43.408 "base_bdev": "nvme0n1", 00:19:43.408 "thin_provision": true, 00:19:43.408 "num_allocated_clusters": 0, 00:19:43.408 "snapshot": false, 00:19:43.408 "clone": false, 00:19:43.408 "esnap_clone": false 00:19:43.408 } 00:19:43.408 } 00:19:43.408 } 00:19:43.408 ]' 00:19:43.408 15:15:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:43.408 15:15:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:19:43.408 15:15:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:43.666 15:15:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:43.666 15:15:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:43.666 15:15:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:19:43.666 15:15:21 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:19:43.666 15:15:21 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:43.666 15:15:21 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:19:43.666 15:15:21 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:19:43.666 15:15:21 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:19:43.666 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:19:43.666 15:15:21 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 1ab9dc4b-eafd-4134-8a96-74197eb134d1 00:19:43.666 15:15:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=1ab9dc4b-eafd-4134-8a96-74197eb134d1 
00:19:43.667 15:15:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:43.667 15:15:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:19:43.667 15:15:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:19:43.667 15:15:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1ab9dc4b-eafd-4134-8a96-74197eb134d1 00:19:43.926 15:15:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:43.926 { 00:19:43.926 "name": "1ab9dc4b-eafd-4134-8a96-74197eb134d1", 00:19:43.926 "aliases": [ 00:19:43.926 "lvs/nvme0n1p0" 00:19:43.926 ], 00:19:43.926 "product_name": "Logical Volume", 00:19:43.926 "block_size": 4096, 00:19:43.926 "num_blocks": 26476544, 00:19:43.926 "uuid": "1ab9dc4b-eafd-4134-8a96-74197eb134d1", 00:19:43.926 "assigned_rate_limits": { 00:19:43.926 "rw_ios_per_sec": 0, 00:19:43.926 "rw_mbytes_per_sec": 0, 00:19:43.926 "r_mbytes_per_sec": 0, 00:19:43.926 "w_mbytes_per_sec": 0 00:19:43.926 }, 00:19:43.926 "claimed": false, 00:19:43.926 "zoned": false, 00:19:43.926 "supported_io_types": { 00:19:43.926 "read": true, 00:19:43.926 "write": true, 00:19:43.926 "unmap": true, 00:19:43.926 "flush": false, 00:19:43.926 "reset": true, 00:19:43.926 "nvme_admin": false, 00:19:43.926 "nvme_io": false, 00:19:43.926 "nvme_io_md": false, 00:19:43.926 "write_zeroes": true, 00:19:43.926 "zcopy": false, 00:19:43.926 "get_zone_info": false, 00:19:43.926 "zone_management": false, 00:19:43.926 "zone_append": false, 00:19:43.926 "compare": false, 00:19:43.926 "compare_and_write": false, 00:19:43.926 "abort": false, 00:19:43.926 "seek_hole": true, 00:19:43.926 "seek_data": true, 00:19:43.926 "copy": false, 00:19:43.926 "nvme_iov_md": false 00:19:43.926 }, 00:19:43.926 "driver_specific": { 00:19:43.926 "lvol": { 00:19:43.926 "lvol_store_uuid": "c81370f5-d6c3-46f8-8faf-eae9fd58e642", 00:19:43.926 "base_bdev": "nvme0n1", 00:19:43.926 "thin_provision": true, 00:19:43.926 "num_allocated_clusters": 0, 00:19:43.926 "snapshot": false, 00:19:43.926 "clone": false, 00:19:43.926 "esnap_clone": false 00:19:43.926 } 00:19:43.926 } 00:19:43.926 } 00:19:43.926 ]' 00:19:43.926 15:15:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:43.926 15:15:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:19:43.926 15:15:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:44.186 15:15:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:44.186 15:15:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:44.186 15:15:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:19:44.186 15:15:22 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:19:44.186 15:15:22 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:19:44.186 15:15:22 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 1ab9dc4b-eafd-4134-8a96-74197eb134d1 -c nvc0n1p0 --l2p_dram_limit 60 00:19:44.186 [2024-07-15 15:15:22.285184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.186 [2024-07-15 15:15:22.285265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:44.186 [2024-07-15 15:15:22.285283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:44.186 [2024-07-15 15:15:22.285294] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.186 [2024-07-15 15:15:22.285397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.186 [2024-07-15 15:15:22.285411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:44.186 [2024-07-15 15:15:22.285420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:19:44.186 [2024-07-15 15:15:22.285430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.186 [2024-07-15 15:15:22.285480] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:44.186 [2024-07-15 15:15:22.286844] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:44.186 [2024-07-15 15:15:22.286878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.186 [2024-07-15 15:15:22.286894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:44.186 [2024-07-15 15:15:22.286906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.407 ms 00:19:44.186 [2024-07-15 15:15:22.286917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.186 [2024-07-15 15:15:22.287058] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 210f65b8-b18e-45ac-848f-3ef4c566f1e0 00:19:44.186 [2024-07-15 15:15:22.288671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.186 [2024-07-15 15:15:22.288711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:44.186 [2024-07-15 15:15:22.288729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:44.186 [2024-07-15 15:15:22.288738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.186 [2024-07-15 15:15:22.296745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.186 [2024-07-15 15:15:22.296793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:44.186 [2024-07-15 15:15:22.296809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.850 ms 00:19:44.186 [2024-07-15 15:15:22.296822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.186 [2024-07-15 15:15:22.297003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.186 [2024-07-15 15:15:22.297041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:44.186 [2024-07-15 15:15:22.297054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:19:44.186 [2024-07-15 15:15:22.297063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.186 [2024-07-15 15:15:22.297189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.186 [2024-07-15 15:15:22.297201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:44.446 [2024-07-15 15:15:22.297214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:44.446 [2024-07-15 15:15:22.297223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.446 [2024-07-15 15:15:22.297316] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:44.446 [2024-07-15 15:15:22.304248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.446 [2024-07-15 15:15:22.304306] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:44.446 [2024-07-15 15:15:22.304325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.977 ms 00:19:44.446 [2024-07-15 15:15:22.304336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.446 [2024-07-15 15:15:22.304429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.446 [2024-07-15 15:15:22.304441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:44.446 [2024-07-15 15:15:22.304451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:44.446 [2024-07-15 15:15:22.304461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.446 [2024-07-15 15:15:22.304550] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:44.446 [2024-07-15 15:15:22.304759] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:44.446 [2024-07-15 15:15:22.304789] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:44.446 [2024-07-15 15:15:22.304806] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:44.446 [2024-07-15 15:15:22.304819] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:44.446 [2024-07-15 15:15:22.304830] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:44.446 [2024-07-15 15:15:22.304840] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:44.446 [2024-07-15 15:15:22.304851] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:44.446 [2024-07-15 15:15:22.304859] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:44.446 [2024-07-15 15:15:22.304872] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:44.446 [2024-07-15 15:15:22.304882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.446 [2024-07-15 15:15:22.304892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:44.446 [2024-07-15 15:15:22.304902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.334 ms 00:19:44.446 [2024-07-15 15:15:22.304911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.446 [2024-07-15 15:15:22.305039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.446 [2024-07-15 15:15:22.305052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:44.446 [2024-07-15 15:15:22.305062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:19:44.446 [2024-07-15 15:15:22.305072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.446 [2024-07-15 15:15:22.305213] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:44.446 [2024-07-15 15:15:22.305237] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:44.446 [2024-07-15 15:15:22.305246] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:44.446 [2024-07-15 15:15:22.305257] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:44.446 [2024-07-15 15:15:22.305265] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:44.446 [2024-07-15 
15:15:22.305276] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:44.446 [2024-07-15 15:15:22.305288] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:44.446 [2024-07-15 15:15:22.305297] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:44.446 [2024-07-15 15:15:22.305305] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:44.446 [2024-07-15 15:15:22.305314] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:44.446 [2024-07-15 15:15:22.305322] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:44.446 [2024-07-15 15:15:22.305332] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:44.446 [2024-07-15 15:15:22.305340] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:44.446 [2024-07-15 15:15:22.305351] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:44.446 [2024-07-15 15:15:22.305359] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:44.446 [2024-07-15 15:15:22.305368] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:44.446 [2024-07-15 15:15:22.305375] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:44.446 [2024-07-15 15:15:22.305386] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:44.446 [2024-07-15 15:15:22.305394] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:44.446 [2024-07-15 15:15:22.305403] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:44.447 [2024-07-15 15:15:22.305410] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:44.447 [2024-07-15 15:15:22.305419] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:44.447 [2024-07-15 15:15:22.305426] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:44.447 [2024-07-15 15:15:22.305436] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:44.447 [2024-07-15 15:15:22.305443] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:44.447 [2024-07-15 15:15:22.305452] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:44.447 [2024-07-15 15:15:22.305460] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:44.447 [2024-07-15 15:15:22.305468] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:44.447 [2024-07-15 15:15:22.305475] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:44.447 [2024-07-15 15:15:22.305484] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:44.447 [2024-07-15 15:15:22.305492] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:44.447 [2024-07-15 15:15:22.305501] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:44.447 [2024-07-15 15:15:22.305508] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:44.447 [2024-07-15 15:15:22.305519] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:44.447 [2024-07-15 15:15:22.305526] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:44.447 [2024-07-15 15:15:22.305535] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:44.447 [2024-07-15 15:15:22.305542] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:19:44.447 [2024-07-15 15:15:22.305551] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:44.447 [2024-07-15 15:15:22.305561] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:44.447 [2024-07-15 15:15:22.305571] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:44.447 [2024-07-15 15:15:22.305580] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:44.447 [2024-07-15 15:15:22.305590] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:44.447 [2024-07-15 15:15:22.305598] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:44.447 [2024-07-15 15:15:22.305606] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:44.447 [2024-07-15 15:15:22.305615] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:44.447 [2024-07-15 15:15:22.305645] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:44.447 [2024-07-15 15:15:22.305654] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:44.447 [2024-07-15 15:15:22.305666] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:44.447 [2024-07-15 15:15:22.305673] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:44.447 [2024-07-15 15:15:22.305685] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:44.447 [2024-07-15 15:15:22.305693] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:44.447 [2024-07-15 15:15:22.305702] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:44.447 [2024-07-15 15:15:22.305710] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:44.447 [2024-07-15 15:15:22.305723] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:44.447 [2024-07-15 15:15:22.305734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:44.447 [2024-07-15 15:15:22.305746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:44.447 [2024-07-15 15:15:22.305755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:44.447 [2024-07-15 15:15:22.305765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:44.447 [2024-07-15 15:15:22.305773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:44.447 [2024-07-15 15:15:22.305782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:44.447 [2024-07-15 15:15:22.305791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:44.447 [2024-07-15 15:15:22.305800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:44.447 [2024-07-15 15:15:22.305808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:44.447 [2024-07-15 
15:15:22.305819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:44.447 [2024-07-15 15:15:22.305827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:44.447 [2024-07-15 15:15:22.305839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:44.447 [2024-07-15 15:15:22.305847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:44.447 [2024-07-15 15:15:22.305856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:44.447 [2024-07-15 15:15:22.305865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:44.447 [2024-07-15 15:15:22.305874] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:44.447 [2024-07-15 15:15:22.305886] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:44.447 [2024-07-15 15:15:22.305897] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:44.447 [2024-07-15 15:15:22.305905] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:44.447 [2024-07-15 15:15:22.305915] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:44.447 [2024-07-15 15:15:22.305924] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:44.447 [2024-07-15 15:15:22.305935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.447 [2024-07-15 15:15:22.305945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:44.447 [2024-07-15 15:15:22.305956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.781 ms 00:19:44.447 [2024-07-15 15:15:22.305964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.447 [2024-07-15 15:15:22.306146] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
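For reference, the FTL startup trace above is the output of the single bdev_ftl_create call issued at ftl/fio.sh@60. The setup sequence the ftl/common.sh and ftl/fio.sh helpers ran to reach that point can be condensed into the sketch below; every command and value is taken from the trace itself, and only the shell variable names are illustrative.

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0      # base device -> nvme0n1
  lvs=$($RPC bdev_lvol_create_lvstore nvme0n1 lvs)                       # lvstore on the base bdev
  lvol=$($RPC bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")            # thin-provisioned 103424 MiB volume
  $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0       # NV cache device -> nvc0n1
  $RPC bdev_split_create nvc0n1 -s 5171 1                                # 5171 MiB cache partition nvc0n1p0
  $RPC -t 240 bdev_ftl_create -b ftl0 -d "$lvol" -c nvc0n1p0 --l2p_dram_limit 60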
00:19:44.447 [2024-07-15 15:15:22.306174] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:48.657 [2024-07-15 15:15:26.403286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.658 [2024-07-15 15:15:26.403384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:48.658 [2024-07-15 15:15:26.403407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4105.022 ms 00:19:48.658 [2024-07-15 15:15:26.403417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.658 [2024-07-15 15:15:26.454795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.658 [2024-07-15 15:15:26.454872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:48.658 [2024-07-15 15:15:26.454895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.065 ms 00:19:48.658 [2024-07-15 15:15:26.454906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.658 [2024-07-15 15:15:26.455169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.658 [2024-07-15 15:15:26.455192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:48.658 [2024-07-15 15:15:26.455208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:19:48.658 [2024-07-15 15:15:26.455219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.658 [2024-07-15 15:15:26.521203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.658 [2024-07-15 15:15:26.521275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:48.658 [2024-07-15 15:15:26.521294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.003 ms 00:19:48.658 [2024-07-15 15:15:26.521304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.658 [2024-07-15 15:15:26.521416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.658 [2024-07-15 15:15:26.521443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:48.658 [2024-07-15 15:15:26.521457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:48.658 [2024-07-15 15:15:26.521466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.658 [2024-07-15 15:15:26.522064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.658 [2024-07-15 15:15:26.522089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:48.658 [2024-07-15 15:15:26.522107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.464 ms 00:19:48.658 [2024-07-15 15:15:26.522116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.658 [2024-07-15 15:15:26.522319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.658 [2024-07-15 15:15:26.522342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:48.658 [2024-07-15 15:15:26.522355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:19:48.658 [2024-07-15 15:15:26.522374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.658 [2024-07-15 15:15:26.551152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.658 [2024-07-15 15:15:26.551218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:48.658 [2024-07-15 
15:15:26.551237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.764 ms 00:19:48.658 [2024-07-15 15:15:26.551247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.658 [2024-07-15 15:15:26.569011] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:48.658 [2024-07-15 15:15:26.587544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.658 [2024-07-15 15:15:26.587621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:48.658 [2024-07-15 15:15:26.587637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.149 ms 00:19:48.658 [2024-07-15 15:15:26.587648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.658 [2024-07-15 15:15:26.672373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.658 [2024-07-15 15:15:26.672477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:48.658 [2024-07-15 15:15:26.672493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.789 ms 00:19:48.658 [2024-07-15 15:15:26.672504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.658 [2024-07-15 15:15:26.672837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.658 [2024-07-15 15:15:26.672859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:48.658 [2024-07-15 15:15:26.672871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:19:48.658 [2024-07-15 15:15:26.672885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.658 [2024-07-15 15:15:26.720584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.658 [2024-07-15 15:15:26.720681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:48.658 [2024-07-15 15:15:26.720698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.625 ms 00:19:48.658 [2024-07-15 15:15:26.720711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.916 [2024-07-15 15:15:26.769189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.916 [2024-07-15 15:15:26.769283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:48.916 [2024-07-15 15:15:26.769319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.436 ms 00:19:48.916 [2024-07-15 15:15:26.769331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.916 [2024-07-15 15:15:26.770402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.916 [2024-07-15 15:15:26.770438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:48.916 [2024-07-15 15:15:26.770451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.951 ms 00:19:48.916 [2024-07-15 15:15:26.770462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.916 [2024-07-15 15:15:26.904014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.916 [2024-07-15 15:15:26.904115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:48.916 [2024-07-15 15:15:26.904134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 133.648 ms 00:19:48.916 [2024-07-15 15:15:26.904150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.916 [2024-07-15 
15:15:26.954956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.916 [2024-07-15 15:15:26.955085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:48.916 [2024-07-15 15:15:26.955105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.782 ms 00:19:48.916 [2024-07-15 15:15:26.955118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.916 [2024-07-15 15:15:27.004983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.916 [2024-07-15 15:15:27.005084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:48.916 [2024-07-15 15:15:27.005100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.811 ms 00:19:48.916 [2024-07-15 15:15:27.005111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.173 [2024-07-15 15:15:27.053438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.173 [2024-07-15 15:15:27.053525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:49.173 [2024-07-15 15:15:27.053557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.292 ms 00:19:49.173 [2024-07-15 15:15:27.053568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.173 [2024-07-15 15:15:27.053728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.173 [2024-07-15 15:15:27.053755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:49.173 [2024-07-15 15:15:27.053766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:19:49.173 [2024-07-15 15:15:27.053780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.173 [2024-07-15 15:15:27.054019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.173 [2024-07-15 15:15:27.054042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:49.173 [2024-07-15 15:15:27.054053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:49.173 [2024-07-15 15:15:27.054064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.173 [2024-07-15 15:15:27.055611] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4779.009 ms, result 0 00:19:49.173 { 00:19:49.173 "name": "ftl0", 00:19:49.173 "uuid": "210f65b8-b18e-45ac-848f-3ef4c566f1e0" 00:19:49.173 } 00:19:49.173 15:15:27 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:19:49.173 15:15:27 ftl.ftl_fio_basic -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:19:49.173 15:15:27 ftl.ftl_fio_basic -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:49.173 15:15:27 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local i 00:19:49.173 15:15:27 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:49.173 15:15:27 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:49.173 15:15:27 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:49.430 15:15:27 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:49.430 [ 00:19:49.430 { 00:19:49.430 "name": "ftl0", 00:19:49.430 "aliases": [ 00:19:49.430 "210f65b8-b18e-45ac-848f-3ef4c566f1e0" 00:19:49.430 ], 00:19:49.430 "product_name": "FTL 
disk", 00:19:49.430 "block_size": 4096, 00:19:49.430 "num_blocks": 20971520, 00:19:49.430 "uuid": "210f65b8-b18e-45ac-848f-3ef4c566f1e0", 00:19:49.430 "assigned_rate_limits": { 00:19:49.430 "rw_ios_per_sec": 0, 00:19:49.430 "rw_mbytes_per_sec": 0, 00:19:49.430 "r_mbytes_per_sec": 0, 00:19:49.430 "w_mbytes_per_sec": 0 00:19:49.430 }, 00:19:49.430 "claimed": false, 00:19:49.430 "zoned": false, 00:19:49.430 "supported_io_types": { 00:19:49.430 "read": true, 00:19:49.430 "write": true, 00:19:49.430 "unmap": true, 00:19:49.430 "flush": true, 00:19:49.430 "reset": false, 00:19:49.430 "nvme_admin": false, 00:19:49.430 "nvme_io": false, 00:19:49.430 "nvme_io_md": false, 00:19:49.430 "write_zeroes": true, 00:19:49.430 "zcopy": false, 00:19:49.430 "get_zone_info": false, 00:19:49.430 "zone_management": false, 00:19:49.430 "zone_append": false, 00:19:49.430 "compare": false, 00:19:49.430 "compare_and_write": false, 00:19:49.430 "abort": false, 00:19:49.430 "seek_hole": false, 00:19:49.430 "seek_data": false, 00:19:49.430 "copy": false, 00:19:49.430 "nvme_iov_md": false 00:19:49.430 }, 00:19:49.430 "driver_specific": { 00:19:49.430 "ftl": { 00:19:49.430 "base_bdev": "1ab9dc4b-eafd-4134-8a96-74197eb134d1", 00:19:49.430 "cache": "nvc0n1p0" 00:19:49.430 } 00:19:49.430 } 00:19:49.430 } 00:19:49.430 ] 00:19:49.430 15:15:27 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # return 0 00:19:49.430 15:15:27 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:19:49.430 15:15:27 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:49.688 15:15:27 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:19:49.688 15:15:27 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:49.947 [2024-07-15 15:15:27.960963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.947 [2024-07-15 15:15:27.961046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:49.947 [2024-07-15 15:15:27.961068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:49.947 [2024-07-15 15:15:27.961077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.947 [2024-07-15 15:15:27.961148] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:49.947 [2024-07-15 15:15:27.965771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.947 [2024-07-15 15:15:27.965822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:49.947 [2024-07-15 15:15:27.965837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.612 ms 00:19:49.947 [2024-07-15 15:15:27.965848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.947 [2024-07-15 15:15:27.966986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.947 [2024-07-15 15:15:27.967024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:49.947 [2024-07-15 15:15:27.967035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.048 ms 00:19:49.947 [2024-07-15 15:15:27.967051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.947 [2024-07-15 15:15:27.970095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.947 [2024-07-15 15:15:27.970123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:49.947 
[2024-07-15 15:15:27.970132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.008 ms 00:19:49.947 [2024-07-15 15:15:27.970144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.947 [2024-07-15 15:15:27.976137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.947 [2024-07-15 15:15:27.976196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:49.947 [2024-07-15 15:15:27.976209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.946 ms 00:19:49.947 [2024-07-15 15:15:27.976219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.947 [2024-07-15 15:15:28.023510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.947 [2024-07-15 15:15:28.023597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:49.947 [2024-07-15 15:15:28.023614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.155 ms 00:19:49.947 [2024-07-15 15:15:28.023626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.947 [2024-07-15 15:15:28.052591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.947 [2024-07-15 15:15:28.052693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:49.947 [2024-07-15 15:15:28.052711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.890 ms 00:19:49.947 [2024-07-15 15:15:28.052723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.947 [2024-07-15 15:15:28.053211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.947 [2024-07-15 15:15:28.053241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:49.947 [2024-07-15 15:15:28.053252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:19:49.947 [2024-07-15 15:15:28.053264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.206 [2024-07-15 15:15:28.102888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.206 [2024-07-15 15:15:28.103013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:50.206 [2024-07-15 15:15:28.103034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.652 ms 00:19:50.206 [2024-07-15 15:15:28.103045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.206 [2024-07-15 15:15:28.152820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.206 [2024-07-15 15:15:28.152906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:50.206 [2024-07-15 15:15:28.152923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.742 ms 00:19:50.206 [2024-07-15 15:15:28.152934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.206 [2024-07-15 15:15:28.202316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.206 [2024-07-15 15:15:28.202457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:50.206 [2024-07-15 15:15:28.202476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.357 ms 00:19:50.206 [2024-07-15 15:15:28.202488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.206 [2024-07-15 15:15:28.250915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.206 [2024-07-15 15:15:28.251015] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:50.206 [2024-07-15 15:15:28.251033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.195 ms 00:19:50.206 [2024-07-15 15:15:28.251044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.206 [2024-07-15 15:15:28.251174] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:50.206 [2024-07-15 15:15:28.251197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:50.206 [2024-07-15 15:15:28.251210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:50.206 [2024-07-15 15:15:28.251222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:50.206 [2024-07-15 15:15:28.251231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:50.206 [2024-07-15 15:15:28.251243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:50.206 [2024-07-15 15:15:28.251253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 
[2024-07-15 15:15:28.251435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:19:50.207 [2024-07-15 15:15:28.251701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.251977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.252002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.252012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.252024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.252033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.252044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.252053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.252063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.252073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.252084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.252093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.252104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.252113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.252124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.252133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.252146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.252155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.252166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.252207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.252221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.252230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.252243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.252252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.252263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.252273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.252284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.252294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:50.207 [2024-07-15 15:15:28.252317] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:50.207 [2024-07-15 15:15:28.252326] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 210f65b8-b18e-45ac-848f-3ef4c566f1e0 00:19:50.208 [2024-07-15 15:15:28.252338] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:50.208 [2024-07-15 15:15:28.252350] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:50.208 [2024-07-15 15:15:28.252363] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:50.208 [2024-07-15 15:15:28.252374] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:50.208 [2024-07-15 15:15:28.252384] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:50.208 [2024-07-15 15:15:28.252393] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:50.208 [2024-07-15 15:15:28.252405] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:50.208 [2024-07-15 15:15:28.252412] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:50.208 [2024-07-15 15:15:28.252422] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:50.208 [2024-07-15 15:15:28.252432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.208 [2024-07-15 15:15:28.252444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:50.208 [2024-07-15 15:15:28.252455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.263 ms 00:19:50.208 [2024-07-15 15:15:28.252466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.208 [2024-07-15 15:15:28.277331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.208 [2024-07-15 15:15:28.277416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:50.208 [2024-07-15 15:15:28.277433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.768 ms 00:19:50.208 [2024-07-15 15:15:28.277444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.208 [2024-07-15 15:15:28.278111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.208 [2024-07-15 15:15:28.278143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:50.208 [2024-07-15 15:15:28.278155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.583 ms 00:19:50.208 [2024-07-15 15:15:28.278166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.466 [2024-07-15 15:15:28.365637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.466 [2024-07-15 15:15:28.365707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:50.466 [2024-07-15 15:15:28.365722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.466 [2024-07-15 15:15:28.365734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:50.466 [2024-07-15 15:15:28.365860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.466 [2024-07-15 15:15:28.365873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:50.466 [2024-07-15 15:15:28.365882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.466 [2024-07-15 15:15:28.365893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.466 [2024-07-15 15:15:28.366083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.466 [2024-07-15 15:15:28.366107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:50.466 [2024-07-15 15:15:28.366118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.466 [2024-07-15 15:15:28.366129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.466 [2024-07-15 15:15:28.366171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.466 [2024-07-15 15:15:28.366186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:50.466 [2024-07-15 15:15:28.366195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.466 [2024-07-15 15:15:28.366206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.466 [2024-07-15 15:15:28.522142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.466 [2024-07-15 15:15:28.522218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:50.466 [2024-07-15 15:15:28.522233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.466 [2024-07-15 15:15:28.522245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.725 [2024-07-15 15:15:28.651851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.725 [2024-07-15 15:15:28.651937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:50.725 [2024-07-15 15:15:28.651970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.725 [2024-07-15 15:15:28.651982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.725 [2024-07-15 15:15:28.652154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.725 [2024-07-15 15:15:28.652173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:50.725 [2024-07-15 15:15:28.652183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.725 [2024-07-15 15:15:28.652194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.725 [2024-07-15 15:15:28.652317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.725 [2024-07-15 15:15:28.652341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:50.725 [2024-07-15 15:15:28.652351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.725 [2024-07-15 15:15:28.652362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.725 [2024-07-15 15:15:28.652526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.725 [2024-07-15 15:15:28.652551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:50.725 [2024-07-15 15:15:28.652563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.725 [2024-07-15 
15:15:28.652575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.725 [2024-07-15 15:15:28.652667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.725 [2024-07-15 15:15:28.652683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:50.725 [2024-07-15 15:15:28.652694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.725 [2024-07-15 15:15:28.652705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.725 [2024-07-15 15:15:28.652800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.725 [2024-07-15 15:15:28.652825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:50.725 [2024-07-15 15:15:28.652838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.725 [2024-07-15 15:15:28.652849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.725 [2024-07-15 15:15:28.652928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.725 [2024-07-15 15:15:28.652948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:50.725 [2024-07-15 15:15:28.652958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.725 [2024-07-15 15:15:28.652969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.725 [2024-07-15 15:15:28.653270] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 693.644 ms, result 0 00:19:50.725 true 00:19:50.725 15:15:28 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 80122 00:19:50.725 15:15:28 ftl.ftl_fio_basic -- common/autotest_common.sh@948 -- # '[' -z 80122 ']' 00:19:50.725 15:15:28 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # kill -0 80122 00:19:50.725 15:15:28 ftl.ftl_fio_basic -- common/autotest_common.sh@953 -- # uname 00:19:50.725 15:15:28 ftl.ftl_fio_basic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:50.725 15:15:28 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80122 00:19:50.725 15:15:28 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:50.725 15:15:28 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:50.725 killing process with pid 80122 00:19:50.725 15:15:28 ftl.ftl_fio_basic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80122' 00:19:50.725 15:15:28 ftl.ftl_fio_basic -- common/autotest_common.sh@967 -- # kill 80122 00:19:50.725 15:15:28 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # wait 80122 00:19:58.836 15:15:35 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:19:58.836 15:15:35 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:58.836 15:15:35 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:19:58.836 15:15:35 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:58.836 15:15:35 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:58.836 15:15:35 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:19:58.836 15:15:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:19:58.836 15:15:35 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:58.836 15:15:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:58.836 15:15:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:58.836 15:15:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:58.836 15:15:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:19:58.836 15:15:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:58.836 15:15:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:58.836 15:15:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:19:58.836 15:15:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:58.836 15:15:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:58.836 15:15:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:58.836 15:15:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:58.836 15:15:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:19:58.836 15:15:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:58.836 15:15:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:19:58.836 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:19:58.836 fio-3.35 00:19:58.836 Starting 1 thread 00:20:03.114 00:20:03.114 test: (groupid=0, jobs=1): err= 0: pid=80391: Mon Jul 15 15:15:41 2024 00:20:03.114 read: IOPS=1062, BW=70.6MiB/s (74.0MB/s)(255MiB/3607msec) 00:20:03.114 slat (nsec): min=5093, max=42848, avg=8028.13, stdev=3401.30 00:20:03.114 clat (usec): min=284, max=113985, avg=430.01, stdev=1835.60 00:20:03.114 lat (usec): min=297, max=113994, avg=438.04, stdev=1835.64 00:20:03.114 clat percentiles (usec): 00:20:03.114 | 1.00th=[ 322], 5.00th=[ 330], 10.00th=[ 334], 20.00th=[ 338], 00:20:03.114 | 30.00th=[ 347], 40.00th=[ 392], 50.00th=[ 400], 60.00th=[ 408], 00:20:03.114 | 70.00th=[ 416], 80.00th=[ 461], 90.00th=[ 478], 95.00th=[ 494], 00:20:03.114 | 99.00th=[ 553], 99.50th=[ 570], 99.90th=[ 783], 99.95th=[ 922], 00:20:03.114 | 99.99th=[113771] 00:20:03.114 write: IOPS=1069, BW=71.0MiB/s (74.5MB/s)(256MiB/3604msec); 0 zone resets 00:20:03.114 slat (usec): min=17, max=129, avg=24.48, stdev= 8.28 00:20:03.114 clat (usec): min=319, max=1990, avg=462.48, stdev=72.38 00:20:03.114 lat (usec): min=341, max=2017, avg=486.96, stdev=72.78 00:20:03.114 clat percentiles (usec): 00:20:03.114 | 1.00th=[ 355], 5.00th=[ 363], 10.00th=[ 396], 20.00th=[ 424], 00:20:03.114 | 30.00th=[ 429], 40.00th=[ 433], 50.00th=[ 441], 60.00th=[ 478], 00:20:03.114 | 70.00th=[ 494], 80.00th=[ 502], 90.00th=[ 545], 95.00th=[ 570], 00:20:03.114 | 99.00th=[ 693], 99.50th=[ 750], 99.90th=[ 1012], 99.95th=[ 1680], 00:20:03.114 | 99.99th=[ 1991] 00:20:03.114 bw ( KiB/s): min=60520, max=80784, per=100.00%, avg=72757.29, stdev=6192.66, samples=7 00:20:03.114 iops : min= 890, max= 1188, avg=1069.71, stdev=91.08, samples=7 00:20:03.114 lat (usec) : 500=86.92%, 750=12.77%, 1000=0.25% 
00:20:03.114 lat (msec) : 2=0.05%, 250=0.01% 00:20:03.114 cpu : usr=99.11%, sys=0.19%, ctx=6, majf=0, minf=1171 00:20:03.114 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:03.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:03.114 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:03.114 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:03.114 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:03.114 00:20:03.114 Run status group 0 (all jobs): 00:20:03.114 READ: bw=70.6MiB/s (74.0MB/s), 70.6MiB/s-70.6MiB/s (74.0MB/s-74.0MB/s), io=255MiB (267MB), run=3607-3607msec 00:20:03.114 WRITE: bw=71.0MiB/s (74.5MB/s), 71.0MiB/s-71.0MiB/s (74.5MB/s-74.5MB/s), io=256MiB (269MB), run=3604-3604msec 00:20:05.647 ----------------------------------------------------- 00:20:05.647 Suppressions used: 00:20:05.647 count bytes template 00:20:05.647 1 5 /usr/src/fio/parse.c 00:20:05.647 1 8 libtcmalloc_minimal.so 00:20:05.647 1 904 libcrypto.so 00:20:05.647 ----------------------------------------------------- 00:20:05.647 00:20:05.647 15:15:43 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:20:05.647 15:15:43 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:05.647 15:15:43 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:05.647 15:15:43 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:20:05.647 15:15:43 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:20:05.647 15:15:43 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:05.647 15:15:43 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:05.647 15:15:43 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:20:05.647 15:15:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:20:05.647 15:15:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:05.647 15:15:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:05.647 15:15:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:05.647 15:15:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:05.647 15:15:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:20:05.647 15:15:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:05.647 15:15:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:05.647 15:15:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:20:05.647 15:15:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:05.647 15:15:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:05.647 15:15:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:05.647 15:15:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:05.647 15:15:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:20:05.647 15:15:43 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:05.647 15:15:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:20:05.647 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:20:05.647 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:20:05.647 fio-3.35 00:20:05.647 Starting 2 threads 00:20:37.783 00:20:37.783 first_half: (groupid=0, jobs=1): err= 0: pid=80501: Mon Jul 15 15:16:12 2024 00:20:37.783 read: IOPS=2366, BW=9464KiB/s (9691kB/s)(255MiB/27603msec) 00:20:37.783 slat (usec): min=4, max=117, avg=13.64, stdev= 5.55 00:20:37.783 clat (usec): min=1205, max=362012, avg=41574.57, stdev=25146.57 00:20:37.783 lat (usec): min=1217, max=362023, avg=41588.21, stdev=25147.19 00:20:37.783 clat percentiles (msec): 00:20:37.783 | 1.00th=[ 19], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 35], 00:20:37.783 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 36], 60.00th=[ 37], 00:20:37.783 | 70.00th=[ 39], 80.00th=[ 41], 90.00th=[ 47], 95.00th=[ 63], 00:20:37.783 | 99.00th=[ 184], 99.50th=[ 211], 99.90th=[ 262], 99.95th=[ 305], 00:20:37.783 | 99.99th=[ 351] 00:20:37.783 write: IOPS=2810, BW=11.0MiB/s (11.5MB/s)(256MiB/23320msec); 0 zone resets 00:20:37.783 slat (usec): min=4, max=1372, avg=13.70, stdev=10.70 00:20:37.783 clat (usec): min=432, max=128993, avg=12423.99, stdev=20269.18 00:20:37.783 lat (usec): min=453, max=129003, avg=12437.69, stdev=20269.27 00:20:37.783 clat percentiles (usec): 00:20:37.783 | 1.00th=[ 1237], 5.00th=[ 1680], 10.00th=[ 1975], 20.00th=[ 2802], 00:20:37.783 | 30.00th=[ 4817], 40.00th=[ 6456], 50.00th=[ 7570], 60.00th=[ 8356], 00:20:37.783 | 70.00th=[ 9503], 80.00th=[ 12649], 90.00th=[ 16712], 95.00th=[ 79168], 00:20:37.783 | 99.00th=[101188], 99.50th=[105382], 99.90th=[121111], 99.95th=[125305], 00:20:37.783 | 99.99th=[127402] 00:20:37.783 bw ( KiB/s): min= 4144, max=40584, per=100.00%, avg=22795.13, stdev=11651.75, samples=23 00:20:37.783 iops : min= 1036, max=10146, avg=5698.78, stdev=2912.94, samples=23 00:20:37.783 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.11% 00:20:37.783 lat (msec) : 2=5.11%, 4=7.84%, 10=23.69%, 20=10.38%, 50=45.99% 00:20:37.783 lat (msec) : 100=4.68%, 250=2.07%, 500=0.06% 00:20:37.783 cpu : usr=99.08%, sys=0.24%, ctx=193, majf=0, minf=5543 00:20:37.783 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:37.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.783 complete : 0=0.0%, 4=99.0%, 8=1.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:37.783 issued rwts: total=65310,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.783 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:37.783 second_half: (groupid=0, jobs=1): err= 0: pid=80502: Mon Jul 15 15:16:12 2024 00:20:37.783 read: IOPS=2348, BW=9392KiB/s (9618kB/s)(255MiB/27814msec) 00:20:37.783 slat (nsec): min=3591, max=91636, avg=9001.36, stdev=3578.60 00:20:37.783 clat (usec): min=1046, max=374740, avg=41221.30, stdev=29175.81 00:20:37.783 lat (usec): min=1062, max=374765, avg=41230.30, stdev=29176.50 00:20:37.783 clat percentiles (msec): 00:20:37.784 | 1.00th=[ 9], 5.00th=[ 32], 10.00th=[ 34], 20.00th=[ 35], 00:20:37.784 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 36], 60.00th=[ 37], 00:20:37.784 | 70.00th=[ 38], 80.00th=[ 40], 
90.00th=[ 42], 95.00th=[ 65], 00:20:37.784 | 99.00th=[ 211], 99.50th=[ 232], 99.90th=[ 268], 99.95th=[ 313], 00:20:37.784 | 99.99th=[ 368] 00:20:37.784 write: IOPS=2578, BW=10.1MiB/s (10.6MB/s)(256MiB/25421msec); 0 zone resets 00:20:37.784 slat (usec): min=5, max=429, avg=12.09, stdev= 7.38 00:20:37.784 clat (usec): min=440, max=129337, avg=13210.70, stdev=21655.88 00:20:37.784 lat (usec): min=461, max=129355, avg=13222.79, stdev=21656.53 00:20:37.784 clat percentiles (usec): 00:20:37.784 | 1.00th=[ 1123], 5.00th=[ 1450], 10.00th=[ 1680], 20.00th=[ 2008], 00:20:37.784 | 30.00th=[ 2573], 40.00th=[ 4752], 50.00th=[ 6259], 60.00th=[ 7635], 00:20:37.784 | 70.00th=[ 9503], 80.00th=[ 14353], 90.00th=[ 36963], 95.00th=[ 80217], 00:20:37.784 | 99.00th=[101188], 99.50th=[105382], 99.90th=[123208], 99.95th=[127402], 00:20:37.784 | 99.99th=[128451] 00:20:37.784 bw ( KiB/s): min= 944, max=47008, per=97.76%, avg=20162.35, stdev=12397.73, samples=26 00:20:37.784 iops : min= 236, max=11752, avg=5040.54, stdev=3099.46, samples=26 00:20:37.784 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.21% 00:20:37.784 lat (msec) : 2=9.74%, 4=7.71%, 10=19.56%, 20=8.83%, 50=47.96% 00:20:37.784 lat (msec) : 100=3.51%, 250=2.31%, 500=0.10% 00:20:37.784 cpu : usr=99.12%, sys=0.27%, ctx=68, majf=0, minf=5566 00:20:37.784 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:37.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.784 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:37.784 issued rwts: total=65309,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.784 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:37.784 00:20:37.784 Run status group 0 (all jobs): 00:20:37.784 READ: bw=18.3MiB/s (19.2MB/s), 9392KiB/s-9464KiB/s (9618kB/s-9691kB/s), io=510MiB (535MB), run=27603-27814msec 00:20:37.784 WRITE: bw=20.1MiB/s (21.1MB/s), 10.1MiB/s-11.0MiB/s (10.6MB/s-11.5MB/s), io=512MiB (537MB), run=23320-25421msec 00:20:37.784 ----------------------------------------------------- 00:20:37.784 Suppressions used: 00:20:37.784 count bytes template 00:20:37.784 2 10 /usr/src/fio/parse.c 00:20:37.784 4 384 /usr/src/fio/iolog.c 00:20:37.784 1 8 libtcmalloc_minimal.so 00:20:37.784 1 904 libcrypto.so 00:20:37.784 ----------------------------------------------------- 00:20:37.784 00:20:37.784 15:16:15 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:20:37.784 15:16:15 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:37.784 15:16:15 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:38.043 15:16:15 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:20:38.043 15:16:15 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:20:38.043 15:16:15 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:38.043 15:16:15 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:38.043 15:16:15 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:38.043 15:16:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:38.043 15:16:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:38.043 15:16:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:20:38.043 15:16:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:38.043 15:16:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:38.043 15:16:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:20:38.043 15:16:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:38.043 15:16:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:38.043 15:16:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:38.043 15:16:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:38.043 15:16:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:20:38.043 15:16:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:38.043 15:16:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:38.043 15:16:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:20:38.043 15:16:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:38.043 15:16:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:38.043 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:20:38.043 fio-3.35 00:20:38.043 Starting 1 thread 00:20:56.164 00:20:56.164 test: (groupid=0, jobs=1): err= 0: pid=80854: Mon Jul 15 15:16:31 2024 00:20:56.164 read: IOPS=7746, BW=30.3MiB/s (31.7MB/s)(255MiB/8417msec) 00:20:56.164 slat (nsec): min=3441, max=47915, avg=5844.93, stdev=1655.26 00:20:56.164 clat (usec): min=750, max=32393, avg=16513.82, stdev=1345.53 00:20:56.164 lat (usec): min=755, max=32400, avg=16519.67, stdev=1345.55 00:20:56.164 clat percentiles (usec): 00:20:56.164 | 1.00th=[15401], 5.00th=[15664], 10.00th=[15795], 20.00th=[15926], 00:20:56.164 | 30.00th=[16057], 40.00th=[16188], 50.00th=[16319], 60.00th=[16450], 00:20:56.164 | 70.00th=[16581], 80.00th=[16909], 90.00th=[17171], 95.00th=[17695], 00:20:56.164 | 99.00th=[22676], 99.50th=[27395], 99.90th=[29230], 99.95th=[29754], 00:20:56.164 | 99.99th=[31851] 00:20:56.164 write: IOPS=11.9k, BW=46.6MiB/s (48.9MB/s)(256MiB/5491msec); 0 zone resets 00:20:56.164 slat (usec): min=4, max=753, avg= 9.32, stdev= 8.97 00:20:56.164 clat (usec): min=609, max=62627, avg=10670.91, stdev=13059.00 00:20:56.164 lat (usec): min=616, max=62636, avg=10680.23, stdev=13059.02 00:20:56.164 clat percentiles (usec): 00:20:56.164 | 1.00th=[ 1020], 5.00th=[ 1254], 10.00th=[ 1418], 20.00th=[ 1614], 00:20:56.164 | 30.00th=[ 1827], 40.00th=[ 2540], 50.00th=[ 7046], 60.00th=[ 8225], 00:20:56.164 | 70.00th=[ 9372], 80.00th=[11469], 90.00th=[36963], 95.00th=[39060], 00:20:56.164 | 99.00th=[53216], 99.50th=[55313], 99.90th=[57410], 99.95th=[57934], 00:20:56.164 | 99.99th=[58983] 00:20:56.164 bw ( KiB/s): min=38432, max=65128, per=99.84%, avg=47662.55, stdev=8489.88, samples=11 00:20:56.164 iops : min= 9608, max=16282, avg=11915.64, stdev=2122.47, samples=11 00:20:56.164 lat (usec) : 750=0.02%, 1000=0.41% 00:20:56.164 lat (msec) : 2=17.46%, 4=3.06%, 10=15.88%, 20=54.33%, 50=8.16% 00:20:56.164 lat (msec) : 100=0.68% 
00:20:56.164 cpu : usr=98.93%, sys=0.38%, ctx=63, majf=0, minf=5567 00:20:56.164 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:56.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.164 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:56.164 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.164 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:56.164 00:20:56.164 Run status group 0 (all jobs): 00:20:56.164 READ: bw=30.3MiB/s (31.7MB/s), 30.3MiB/s-30.3MiB/s (31.7MB/s-31.7MB/s), io=255MiB (267MB), run=8417-8417msec 00:20:56.164 WRITE: bw=46.6MiB/s (48.9MB/s), 46.6MiB/s-46.6MiB/s (48.9MB/s-48.9MB/s), io=256MiB (268MB), run=5491-5491msec 00:20:56.164 ----------------------------------------------------- 00:20:56.164 Suppressions used: 00:20:56.164 count bytes template 00:20:56.164 1 5 /usr/src/fio/parse.c 00:20:56.164 2 192 /usr/src/fio/iolog.c 00:20:56.164 1 8 libtcmalloc_minimal.so 00:20:56.164 1 904 libcrypto.so 00:20:56.164 ----------------------------------------------------- 00:20:56.164 00:20:56.164 15:16:33 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:20:56.164 15:16:33 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:56.164 15:16:33 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:56.164 15:16:33 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:56.164 15:16:33 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:20:56.164 Remove shared memory files 00:20:56.164 15:16:33 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:20:56.164 15:16:33 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:20:56.164 15:16:33 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:20:56.164 15:16:33 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid62299 /dev/shm/spdk_tgt_trace.pid79037 00:20:56.164 15:16:33 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:20:56.164 15:16:33 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:20:56.164 00:20:56.164 real 1m15.971s 00:20:56.164 user 2m48.685s 00:20:56.164 sys 0m3.629s 00:20:56.164 15:16:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:56.164 15:16:33 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:56.164 ************************************ 00:20:56.164 END TEST ftl_fio_basic 00:20:56.164 ************************************ 00:20:56.164 15:16:33 ftl -- common/autotest_common.sh@1142 -- # return 0 00:20:56.164 15:16:33 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:20:56.164 15:16:33 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:20:56.164 15:16:33 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:56.164 15:16:33 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:56.164 ************************************ 00:20:56.164 START TEST ftl_bdevperf 00:20:56.164 ************************************ 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:20:56.164 * Looking for test storage... 
00:20:56.164 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:20:56.164 15:16:33 ftl.ftl_bdevperf 
-- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@19 -- # bdevperf_pid=81093 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # waitforlisten 81093 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 81093 ']' 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:56.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:56.164 15:16:33 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:56.164 [2024-07-15 15:16:33.898156] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:20:56.164 [2024-07-15 15:16:33.898290] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81093 ] 00:20:56.164 [2024-07-15 15:16:34.061082] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.423 [2024-07-15 15:16:34.306692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.725 15:16:34 ftl.ftl_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:56.725 15:16:34 ftl.ftl_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:20:56.725 15:16:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:56.725 15:16:34 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:20:56.725 15:16:34 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:56.725 15:16:34 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:20:56.725 15:16:34 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:20:56.725 15:16:34 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:56.983 15:16:34 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:56.983 15:16:34 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:20:56.983 15:16:34 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:56.983 15:16:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:20:56.983 15:16:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:56.983 15:16:34 
ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:20:56.983 15:16:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:20:56.983 15:16:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:57.242 15:16:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:57.242 { 00:20:57.242 "name": "nvme0n1", 00:20:57.242 "aliases": [ 00:20:57.242 "1adb61e1-697f-4713-ac11-2267cfd5d283" 00:20:57.242 ], 00:20:57.242 "product_name": "NVMe disk", 00:20:57.242 "block_size": 4096, 00:20:57.242 "num_blocks": 1310720, 00:20:57.242 "uuid": "1adb61e1-697f-4713-ac11-2267cfd5d283", 00:20:57.242 "assigned_rate_limits": { 00:20:57.242 "rw_ios_per_sec": 0, 00:20:57.242 "rw_mbytes_per_sec": 0, 00:20:57.242 "r_mbytes_per_sec": 0, 00:20:57.242 "w_mbytes_per_sec": 0 00:20:57.242 }, 00:20:57.242 "claimed": true, 00:20:57.242 "claim_type": "read_many_write_one", 00:20:57.242 "zoned": false, 00:20:57.242 "supported_io_types": { 00:20:57.242 "read": true, 00:20:57.242 "write": true, 00:20:57.242 "unmap": true, 00:20:57.242 "flush": true, 00:20:57.242 "reset": true, 00:20:57.242 "nvme_admin": true, 00:20:57.242 "nvme_io": true, 00:20:57.242 "nvme_io_md": false, 00:20:57.242 "write_zeroes": true, 00:20:57.242 "zcopy": false, 00:20:57.242 "get_zone_info": false, 00:20:57.242 "zone_management": false, 00:20:57.242 "zone_append": false, 00:20:57.242 "compare": true, 00:20:57.242 "compare_and_write": false, 00:20:57.242 "abort": true, 00:20:57.242 "seek_hole": false, 00:20:57.242 "seek_data": false, 00:20:57.242 "copy": true, 00:20:57.242 "nvme_iov_md": false 00:20:57.242 }, 00:20:57.242 "driver_specific": { 00:20:57.242 "nvme": [ 00:20:57.242 { 00:20:57.242 "pci_address": "0000:00:11.0", 00:20:57.242 "trid": { 00:20:57.242 "trtype": "PCIe", 00:20:57.242 "traddr": "0000:00:11.0" 00:20:57.242 }, 00:20:57.242 "ctrlr_data": { 00:20:57.242 "cntlid": 0, 00:20:57.242 "vendor_id": "0x1b36", 00:20:57.242 "model_number": "QEMU NVMe Ctrl", 00:20:57.242 "serial_number": "12341", 00:20:57.242 "firmware_revision": "8.0.0", 00:20:57.242 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:57.242 "oacs": { 00:20:57.242 "security": 0, 00:20:57.242 "format": 1, 00:20:57.242 "firmware": 0, 00:20:57.242 "ns_manage": 1 00:20:57.242 }, 00:20:57.242 "multi_ctrlr": false, 00:20:57.242 "ana_reporting": false 00:20:57.242 }, 00:20:57.242 "vs": { 00:20:57.242 "nvme_version": "1.4" 00:20:57.242 }, 00:20:57.242 "ns_data": { 00:20:57.242 "id": 1, 00:20:57.242 "can_share": false 00:20:57.242 } 00:20:57.242 } 00:20:57.242 ], 00:20:57.242 "mp_policy": "active_passive" 00:20:57.242 } 00:20:57.242 } 00:20:57.242 ]' 00:20:57.242 15:16:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:57.242 15:16:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:20:57.242 15:16:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:57.242 15:16:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:20:57.242 15:16:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:20:57.242 15:16:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:20:57.242 15:16:35 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:20:57.242 15:16:35 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:57.242 15:16:35 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:20:57.242 15:16:35 ftl.ftl_bdevperf 
-- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:57.242 15:16:35 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:57.500 15:16:35 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=c81370f5-d6c3-46f8-8faf-eae9fd58e642 00:20:57.500 15:16:35 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:20:57.500 15:16:35 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c81370f5-d6c3-46f8-8faf-eae9fd58e642 00:20:57.759 15:16:35 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:58.017 15:16:35 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=f7fd0b7d-f17c-4cf8-9ebf-4f739f154621 00:20:58.017 15:16:35 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f7fd0b7d-f17c-4cf8-9ebf-4f739f154621 00:20:58.017 15:16:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # split_bdev=365ed0ad-d03e-4219-8d31-748437b96c0d 00:20:58.017 15:16:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:10.0 365ed0ad-d03e-4219-8d31-748437b96c0d 00:20:58.017 15:16:36 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:20:58.017 15:16:36 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:58.017 15:16:36 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=365ed0ad-d03e-4219-8d31-748437b96c0d 00:20:58.017 15:16:36 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:20:58.017 15:16:36 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 365ed0ad-d03e-4219-8d31-748437b96c0d 00:20:58.017 15:16:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=365ed0ad-d03e-4219-8d31-748437b96c0d 00:20:58.017 15:16:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:58.017 15:16:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:20:58.017 15:16:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:20:58.017 15:16:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 365ed0ad-d03e-4219-8d31-748437b96c0d 00:20:58.274 15:16:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:58.274 { 00:20:58.274 "name": "365ed0ad-d03e-4219-8d31-748437b96c0d", 00:20:58.274 "aliases": [ 00:20:58.274 "lvs/nvme0n1p0" 00:20:58.274 ], 00:20:58.275 "product_name": "Logical Volume", 00:20:58.275 "block_size": 4096, 00:20:58.275 "num_blocks": 26476544, 00:20:58.275 "uuid": "365ed0ad-d03e-4219-8d31-748437b96c0d", 00:20:58.275 "assigned_rate_limits": { 00:20:58.275 "rw_ios_per_sec": 0, 00:20:58.275 "rw_mbytes_per_sec": 0, 00:20:58.275 "r_mbytes_per_sec": 0, 00:20:58.275 "w_mbytes_per_sec": 0 00:20:58.275 }, 00:20:58.275 "claimed": false, 00:20:58.275 "zoned": false, 00:20:58.275 "supported_io_types": { 00:20:58.275 "read": true, 00:20:58.275 "write": true, 00:20:58.275 "unmap": true, 00:20:58.275 "flush": false, 00:20:58.275 "reset": true, 00:20:58.275 "nvme_admin": false, 00:20:58.275 "nvme_io": false, 00:20:58.275 "nvme_io_md": false, 00:20:58.275 "write_zeroes": true, 00:20:58.275 "zcopy": false, 00:20:58.275 "get_zone_info": false, 00:20:58.275 "zone_management": false, 00:20:58.275 "zone_append": false, 00:20:58.275 "compare": false, 00:20:58.275 "compare_and_write": false, 00:20:58.275 "abort": false, 00:20:58.275 "seek_hole": true, 
00:20:58.275 "seek_data": true, 00:20:58.275 "copy": false, 00:20:58.275 "nvme_iov_md": false 00:20:58.275 }, 00:20:58.275 "driver_specific": { 00:20:58.275 "lvol": { 00:20:58.275 "lvol_store_uuid": "f7fd0b7d-f17c-4cf8-9ebf-4f739f154621", 00:20:58.275 "base_bdev": "nvme0n1", 00:20:58.275 "thin_provision": true, 00:20:58.275 "num_allocated_clusters": 0, 00:20:58.275 "snapshot": false, 00:20:58.275 "clone": false, 00:20:58.275 "esnap_clone": false 00:20:58.275 } 00:20:58.275 } 00:20:58.275 } 00:20:58.275 ]' 00:20:58.275 15:16:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:58.275 15:16:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:20:58.275 15:16:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:58.533 15:16:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:20:58.533 15:16:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:20:58.533 15:16:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:20:58.533 15:16:36 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:20:58.533 15:16:36 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:20:58.533 15:16:36 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:58.790 15:16:36 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:58.790 15:16:36 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:58.790 15:16:36 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 365ed0ad-d03e-4219-8d31-748437b96c0d 00:20:58.790 15:16:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=365ed0ad-d03e-4219-8d31-748437b96c0d 00:20:58.790 15:16:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:58.790 15:16:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:20:58.790 15:16:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:20:58.790 15:16:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 365ed0ad-d03e-4219-8d31-748437b96c0d 00:20:58.790 15:16:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:58.790 { 00:20:58.790 "name": "365ed0ad-d03e-4219-8d31-748437b96c0d", 00:20:58.790 "aliases": [ 00:20:58.790 "lvs/nvme0n1p0" 00:20:58.790 ], 00:20:58.790 "product_name": "Logical Volume", 00:20:58.790 "block_size": 4096, 00:20:58.790 "num_blocks": 26476544, 00:20:58.790 "uuid": "365ed0ad-d03e-4219-8d31-748437b96c0d", 00:20:58.790 "assigned_rate_limits": { 00:20:58.790 "rw_ios_per_sec": 0, 00:20:58.790 "rw_mbytes_per_sec": 0, 00:20:58.790 "r_mbytes_per_sec": 0, 00:20:58.790 "w_mbytes_per_sec": 0 00:20:58.790 }, 00:20:58.790 "claimed": false, 00:20:58.790 "zoned": false, 00:20:58.790 "supported_io_types": { 00:20:58.790 "read": true, 00:20:58.790 "write": true, 00:20:58.791 "unmap": true, 00:20:58.791 "flush": false, 00:20:58.791 "reset": true, 00:20:58.791 "nvme_admin": false, 00:20:58.791 "nvme_io": false, 00:20:58.791 "nvme_io_md": false, 00:20:58.791 "write_zeroes": true, 00:20:58.791 "zcopy": false, 00:20:58.791 "get_zone_info": false, 00:20:58.791 "zone_management": false, 00:20:58.791 "zone_append": false, 00:20:58.791 "compare": false, 00:20:58.791 "compare_and_write": false, 00:20:58.791 "abort": false, 00:20:58.791 "seek_hole": true, 00:20:58.791 "seek_data": true, 00:20:58.791 
"copy": false, 00:20:58.791 "nvme_iov_md": false 00:20:58.791 }, 00:20:58.791 "driver_specific": { 00:20:58.791 "lvol": { 00:20:58.791 "lvol_store_uuid": "f7fd0b7d-f17c-4cf8-9ebf-4f739f154621", 00:20:58.791 "base_bdev": "nvme0n1", 00:20:58.791 "thin_provision": true, 00:20:58.791 "num_allocated_clusters": 0, 00:20:58.791 "snapshot": false, 00:20:58.791 "clone": false, 00:20:58.791 "esnap_clone": false 00:20:58.791 } 00:20:58.791 } 00:20:58.791 } 00:20:58.791 ]' 00:20:58.791 15:16:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:58.791 15:16:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:20:58.791 15:16:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:59.049 15:16:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:20:59.049 15:16:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:20:59.049 15:16:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:20:59.049 15:16:36 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:20:59.049 15:16:36 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:59.049 15:16:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:20:59.049 15:16:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # get_bdev_size 365ed0ad-d03e-4219-8d31-748437b96c0d 00:20:59.049 15:16:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=365ed0ad-d03e-4219-8d31-748437b96c0d 00:20:59.049 15:16:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:59.049 15:16:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:20:59.049 15:16:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:20:59.049 15:16:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 365ed0ad-d03e-4219-8d31-748437b96c0d 00:20:59.308 15:16:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:59.308 { 00:20:59.308 "name": "365ed0ad-d03e-4219-8d31-748437b96c0d", 00:20:59.308 "aliases": [ 00:20:59.308 "lvs/nvme0n1p0" 00:20:59.308 ], 00:20:59.308 "product_name": "Logical Volume", 00:20:59.308 "block_size": 4096, 00:20:59.308 "num_blocks": 26476544, 00:20:59.308 "uuid": "365ed0ad-d03e-4219-8d31-748437b96c0d", 00:20:59.308 "assigned_rate_limits": { 00:20:59.308 "rw_ios_per_sec": 0, 00:20:59.308 "rw_mbytes_per_sec": 0, 00:20:59.308 "r_mbytes_per_sec": 0, 00:20:59.308 "w_mbytes_per_sec": 0 00:20:59.308 }, 00:20:59.308 "claimed": false, 00:20:59.308 "zoned": false, 00:20:59.308 "supported_io_types": { 00:20:59.308 "read": true, 00:20:59.308 "write": true, 00:20:59.308 "unmap": true, 00:20:59.308 "flush": false, 00:20:59.308 "reset": true, 00:20:59.308 "nvme_admin": false, 00:20:59.308 "nvme_io": false, 00:20:59.308 "nvme_io_md": false, 00:20:59.308 "write_zeroes": true, 00:20:59.308 "zcopy": false, 00:20:59.308 "get_zone_info": false, 00:20:59.308 "zone_management": false, 00:20:59.308 "zone_append": false, 00:20:59.308 "compare": false, 00:20:59.308 "compare_and_write": false, 00:20:59.308 "abort": false, 00:20:59.308 "seek_hole": true, 00:20:59.308 "seek_data": true, 00:20:59.308 "copy": false, 00:20:59.308 "nvme_iov_md": false 00:20:59.308 }, 00:20:59.308 "driver_specific": { 00:20:59.308 "lvol": { 00:20:59.308 "lvol_store_uuid": "f7fd0b7d-f17c-4cf8-9ebf-4f739f154621", 00:20:59.308 "base_bdev": 
"nvme0n1", 00:20:59.308 "thin_provision": true, 00:20:59.308 "num_allocated_clusters": 0, 00:20:59.308 "snapshot": false, 00:20:59.308 "clone": false, 00:20:59.308 "esnap_clone": false 00:20:59.308 } 00:20:59.308 } 00:20:59.308 } 00:20:59.308 ]' 00:20:59.308 15:16:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:59.308 15:16:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:20:59.308 15:16:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:59.590 15:16:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:20:59.590 15:16:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:20:59.590 15:16:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:20:59.590 15:16:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:20:59.590 15:16:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 365ed0ad-d03e-4219-8d31-748437b96c0d -c nvc0n1p0 --l2p_dram_limit 20 00:20:59.590 [2024-07-15 15:16:37.610943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.590 [2024-07-15 15:16:37.611003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:59.590 [2024-07-15 15:16:37.611020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:59.590 [2024-07-15 15:16:37.611028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.590 [2024-07-15 15:16:37.611090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.590 [2024-07-15 15:16:37.611100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:59.590 [2024-07-15 15:16:37.611110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:20:59.590 [2024-07-15 15:16:37.611120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.590 [2024-07-15 15:16:37.611138] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:59.590 [2024-07-15 15:16:37.612350] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:59.590 [2024-07-15 15:16:37.612374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.590 [2024-07-15 15:16:37.612384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:59.590 [2024-07-15 15:16:37.612395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.240 ms 00:20:59.590 [2024-07-15 15:16:37.612402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.590 [2024-07-15 15:16:37.612435] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 50c39ba2-bdfc-449c-b0d9-d745c8a98f1e 00:20:59.590 [2024-07-15 15:16:37.613864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.590 [2024-07-15 15:16:37.613898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:59.590 [2024-07-15 15:16:37.613909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:20:59.590 [2024-07-15 15:16:37.613922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.590 [2024-07-15 15:16:37.621502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.590 [2024-07-15 15:16:37.621541] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:59.590 [2024-07-15 15:16:37.621551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.508 ms 00:20:59.590 [2024-07-15 15:16:37.621562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.590 [2024-07-15 15:16:37.621659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.590 [2024-07-15 15:16:37.621680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:59.590 [2024-07-15 15:16:37.621688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:20:59.590 [2024-07-15 15:16:37.621700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.590 [2024-07-15 15:16:37.621761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.590 [2024-07-15 15:16:37.621772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:59.590 [2024-07-15 15:16:37.621780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:59.590 [2024-07-15 15:16:37.621790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.590 [2024-07-15 15:16:37.621812] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:59.590 [2024-07-15 15:16:37.628374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.590 [2024-07-15 15:16:37.628411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:59.590 [2024-07-15 15:16:37.628427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.579 ms 00:20:59.590 [2024-07-15 15:16:37.628436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.590 [2024-07-15 15:16:37.628480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.590 [2024-07-15 15:16:37.628490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:59.590 [2024-07-15 15:16:37.628501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:59.590 [2024-07-15 15:16:37.628509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.590 [2024-07-15 15:16:37.628560] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:59.590 [2024-07-15 15:16:37.628714] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:59.590 [2024-07-15 15:16:37.628731] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:59.590 [2024-07-15 15:16:37.628744] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:59.590 [2024-07-15 15:16:37.628757] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:59.590 [2024-07-15 15:16:37.628768] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:59.590 [2024-07-15 15:16:37.628779] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:59.590 [2024-07-15 15:16:37.628787] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:59.590 [2024-07-15 15:16:37.628799] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:59.590 [2024-07-15 15:16:37.628807] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache chunk count 5 00:20:59.590 [2024-07-15 15:16:37.628818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.590 [2024-07-15 15:16:37.628827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:59.590 [2024-07-15 15:16:37.628841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:20:59.590 [2024-07-15 15:16:37.628849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.590 [2024-07-15 15:16:37.628933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.590 [2024-07-15 15:16:37.628942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:59.590 [2024-07-15 15:16:37.628953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:20:59.590 [2024-07-15 15:16:37.628962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.590 [2024-07-15 15:16:37.629070] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:59.590 [2024-07-15 15:16:37.629082] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:59.590 [2024-07-15 15:16:37.629094] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:59.590 [2024-07-15 15:16:37.629105] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:59.590 [2024-07-15 15:16:37.629116] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:59.590 [2024-07-15 15:16:37.629124] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:59.590 [2024-07-15 15:16:37.629134] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:59.590 [2024-07-15 15:16:37.629142] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:59.590 [2024-07-15 15:16:37.629152] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:59.590 [2024-07-15 15:16:37.629160] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:59.590 [2024-07-15 15:16:37.629170] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:59.590 [2024-07-15 15:16:37.629178] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:59.590 [2024-07-15 15:16:37.629187] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:59.590 [2024-07-15 15:16:37.629195] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:59.590 [2024-07-15 15:16:37.629206] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:59.590 [2024-07-15 15:16:37.629215] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:59.590 [2024-07-15 15:16:37.629227] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:59.590 [2024-07-15 15:16:37.629235] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:59.590 [2024-07-15 15:16:37.629261] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:59.590 [2024-07-15 15:16:37.629269] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:59.590 [2024-07-15 15:16:37.629279] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:59.590 [2024-07-15 15:16:37.629287] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:59.591 [2024-07-15 15:16:37.629296] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:59.591 [2024-07-15 15:16:37.629305] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:59.591 [2024-07-15 15:16:37.629314] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:59.591 [2024-07-15 15:16:37.629322] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:59.591 [2024-07-15 15:16:37.629332] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:59.591 [2024-07-15 15:16:37.629339] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:59.591 [2024-07-15 15:16:37.629348] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:59.591 [2024-07-15 15:16:37.629356] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:59.591 [2024-07-15 15:16:37.629366] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:59.591 [2024-07-15 15:16:37.629376] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:59.591 [2024-07-15 15:16:37.629395] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:59.591 [2024-07-15 15:16:37.629407] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:59.591 [2024-07-15 15:16:37.629419] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:59.591 [2024-07-15 15:16:37.629427] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:59.591 [2024-07-15 15:16:37.629438] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:59.591 [2024-07-15 15:16:37.629446] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:59.591 [2024-07-15 15:16:37.629457] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:59.591 [2024-07-15 15:16:37.629465] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:59.591 [2024-07-15 15:16:37.629474] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:59.591 [2024-07-15 15:16:37.629482] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:59.591 [2024-07-15 15:16:37.629492] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:59.591 [2024-07-15 15:16:37.629499] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:59.591 [2024-07-15 15:16:37.629510] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:59.591 [2024-07-15 15:16:37.629519] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:59.591 [2024-07-15 15:16:37.629529] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:59.591 [2024-07-15 15:16:37.629538] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:59.591 [2024-07-15 15:16:37.629551] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:59.591 [2024-07-15 15:16:37.629559] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:59.591 [2024-07-15 15:16:37.629568] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:59.591 [2024-07-15 15:16:37.629576] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:59.591 [2024-07-15 15:16:37.629586] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:59.591 [2024-07-15 15:16:37.629599] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:59.591 [2024-07-15 15:16:37.629611] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:59.591 [2024-07-15 15:16:37.629621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:59.591 [2024-07-15 15:16:37.629632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:59.591 [2024-07-15 15:16:37.629641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:59.591 [2024-07-15 15:16:37.629652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:59.591 [2024-07-15 15:16:37.629661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:59.591 [2024-07-15 15:16:37.629671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:59.591 [2024-07-15 15:16:37.629679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:59.591 [2024-07-15 15:16:37.629690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:59.591 [2024-07-15 15:16:37.629698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:59.591 [2024-07-15 15:16:37.629713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:59.591 [2024-07-15 15:16:37.629722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:59.591 [2024-07-15 15:16:37.629732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:59.591 [2024-07-15 15:16:37.629740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:59.591 [2024-07-15 15:16:37.629751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:59.591 [2024-07-15 15:16:37.629758] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:59.591 [2024-07-15 15:16:37.629770] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:59.591 [2024-07-15 15:16:37.629779] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:59.591 [2024-07-15 15:16:37.629790] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:59.591 [2024-07-15 15:16:37.629799] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:59.591 [2024-07-15 15:16:37.629809] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:59.591 [2024-07-15 15:16:37.629818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.591 [2024-07-15 15:16:37.629831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:59.591 [2024-07-15 15:16:37.629841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.817 ms 00:20:59.591 [2024-07-15 15:16:37.629851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.591 [2024-07-15 15:16:37.629896] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:20:59.591 [2024-07-15 15:16:37.629914] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:02.872 [2024-07-15 15:16:40.670724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.872 [2024-07-15 15:16:40.670794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:02.872 [2024-07-15 15:16:40.670816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3046.688 ms 00:21:02.872 [2024-07-15 15:16:40.670827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.872 [2024-07-15 15:16:40.726036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.872 [2024-07-15 15:16:40.726108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:02.872 [2024-07-15 15:16:40.726122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.983 ms 00:21:02.872 [2024-07-15 15:16:40.726132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.872 [2024-07-15 15:16:40.726284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.872 [2024-07-15 15:16:40.726296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:02.872 [2024-07-15 15:16:40.726306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:21:02.872 [2024-07-15 15:16:40.726316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.872 [2024-07-15 15:16:40.776457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.872 [2024-07-15 15:16:40.776499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:02.872 [2024-07-15 15:16:40.776527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.205 ms 00:21:02.872 [2024-07-15 15:16:40.776537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.872 [2024-07-15 15:16:40.776576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.872 [2024-07-15 15:16:40.776590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:02.872 [2024-07-15 15:16:40.776599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:02.872 [2024-07-15 15:16:40.776608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.872 [2024-07-15 15:16:40.777112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.872 [2024-07-15 15:16:40.777130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:02.872 [2024-07-15 15:16:40.777139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.450 ms 00:21:02.872 [2024-07-15 15:16:40.777149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.872 [2024-07-15 15:16:40.777272] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.872 [2024-07-15 15:16:40.777292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:02.872 [2024-07-15 15:16:40.777301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:21:02.872 [2024-07-15 15:16:40.777317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.872 [2024-07-15 15:16:40.797893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.872 [2024-07-15 15:16:40.797927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:02.872 [2024-07-15 15:16:40.797938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.595 ms 00:21:02.872 [2024-07-15 15:16:40.797946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.872 [2024-07-15 15:16:40.811524] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:21:02.872 [2024-07-15 15:16:40.817435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.872 [2024-07-15 15:16:40.817461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:02.872 [2024-07-15 15:16:40.817473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.431 ms 00:21:02.872 [2024-07-15 15:16:40.817497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.872 [2024-07-15 15:16:40.909594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.872 [2024-07-15 15:16:40.909663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:02.872 [2024-07-15 15:16:40.909679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 92.237 ms 00:21:02.872 [2024-07-15 15:16:40.909687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.872 [2024-07-15 15:16:40.909865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.872 [2024-07-15 15:16:40.909875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:02.872 [2024-07-15 15:16:40.909888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:21:02.872 [2024-07-15 15:16:40.909897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.872 [2024-07-15 15:16:40.950131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.872 [2024-07-15 15:16:40.950197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:02.872 [2024-07-15 15:16:40.950215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.265 ms 00:21:02.872 [2024-07-15 15:16:40.950223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.134 [2024-07-15 15:16:40.994335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.134 [2024-07-15 15:16:40.994396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:03.134 [2024-07-15 15:16:40.994420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.148 ms 00:21:03.134 [2024-07-15 15:16:40.994429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.134 [2024-07-15 15:16:40.995374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.134 [2024-07-15 15:16:40.995399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:03.134 [2024-07-15 15:16:40.995414] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.886 ms 00:21:03.134 [2024-07-15 15:16:40.995423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.134 [2024-07-15 15:16:41.115246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.134 [2024-07-15 15:16:41.115300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:03.134 [2024-07-15 15:16:41.115321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 119.980 ms 00:21:03.134 [2024-07-15 15:16:41.115329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.134 [2024-07-15 15:16:41.156925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.134 [2024-07-15 15:16:41.157001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:03.134 [2024-07-15 15:16:41.157020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.629 ms 00:21:03.134 [2024-07-15 15:16:41.157030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.134 [2024-07-15 15:16:41.196466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.134 [2024-07-15 15:16:41.196509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:03.134 [2024-07-15 15:16:41.196524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.470 ms 00:21:03.134 [2024-07-15 15:16:41.196531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.134 [2024-07-15 15:16:41.237430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.134 [2024-07-15 15:16:41.237479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:03.134 [2024-07-15 15:16:41.237497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.934 ms 00:21:03.134 [2024-07-15 15:16:41.237506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.134 [2024-07-15 15:16:41.237569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.134 [2024-07-15 15:16:41.237580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:03.134 [2024-07-15 15:16:41.237596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:03.134 [2024-07-15 15:16:41.237605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.134 [2024-07-15 15:16:41.237711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.134 [2024-07-15 15:16:41.237723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:03.134 [2024-07-15 15:16:41.237733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:21:03.134 [2024-07-15 15:16:41.237741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.134 [2024-07-15 15:16:41.238938] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3634.449 ms, result 0 00:21:03.134 { 00:21:03.134 "name": "ftl0", 00:21:03.134 "uuid": "50c39ba2-bdfc-449c-b0d9-d745c8a98f1e" 00:21:03.134 } 00:21:03.403 15:16:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:21:03.403 15:16:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # jq -r .name 00:21:03.403 15:16:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:21:03.403 15:16:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632
[2024-07-15 15:16:41.522926] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:21:03.661 I/O size of 69632 is greater than zero copy threshold (65536).
00:21:03.661 Zero copy mechanism will not be used.
00:21:03.661 Running I/O for 4 seconds...
00:21:07.898
00:21:07.898 Latency(us)
00:21:07.898 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:07.898 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:21:07.898 ftl0 : 4.00 1967.03 130.62 0.00 0.00 531.46 194.96 944.41
00:21:07.898 ===================================================================================================================
00:21:07.898 Total : 1967.03 130.62 0.00 0.00 531.46 194.96 944.41
00:21:07.898 [2024-07-15 15:16:45.525599] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:21:07.898 0
00:21:07.898 15:16:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
[2024-07-15 15:16:45.638962] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:21:07.898 Running I/O for 4 seconds...
00:21:12.087
00:21:12.087 Latency(us)
00:21:12.087 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:12.087 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:21:12.087 ftl0 : 4.01 9932.89 38.80 0.00 0.00 12859.97 266.51 37776.21
00:21:12.087 ===================================================================================================================
00:21:12.087 Total : 9932.89 38.80 0.00 0.00 12859.97 0.00 37776.21
00:21:12.087 0
00:21:12.087 [2024-07-15 15:16:49.655954] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:21:12.087 15:16:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
[2024-07-15 15:16:49.798861] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:21:12.087 Running I/O for 4 seconds...
00:21:16.275
00:21:16.275 Latency(us)
00:21:16.275 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:16.275 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:16.275 Verification LBA range: start 0x0 length 0x1400000
00:21:16.275 ftl0 : 4.01 7977.86 31.16 0.00 0.00 15993.89 275.45 35715.69
00:21:16.275 ===================================================================================================================
00:21:16.275 Total : 7977.86 31.16 0.00 0.00 15993.89 0.00 35715.69
00:21:16.275 [2024-07-15 15:16:53.819465] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:21:16.275 0
00:21:16.275 15:16:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
[2024-07-15 15:16:54.010907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:16.275 [2024-07-15 15:16:54.010967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:21:16.275 [2024-07-15 15:16:54.010986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:21:16.275 [2024-07-15 15:16:54.011009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:16.275 [2024-07-15 15:16:54.011036] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:21:16.275 [2024-07-15 15:16:54.014865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:16.275 [2024-07-15 15:16:54.014899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:21:16.275 [2024-07-15 15:16:54.014909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.820 ms
00:21:16.275 [2024-07-15 15:16:54.014918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:16.275 [2024-07-15 15:16:54.016796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:16.275 [2024-07-15 15:16:54.016838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:21:16.275 [2024-07-15 15:16:54.016848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.857 ms
00:21:16.275 [2024-07-15 15:16:54.016858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:16.275 [2024-07-15 15:16:54.230917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:16.275 [2024-07-15 15:16:54.231015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:21:16.275 [2024-07-15 15:16:54.231032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 214.444 ms
00:21:16.275 [2024-07-15 15:16:54.231048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:16.275 [2024-07-15 15:16:54.236356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:16.276 [2024-07-15 15:16:54.236395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:21:16.276 [2024-07-15 15:16:54.236406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.278 ms
00:21:16.276 [2024-07-15 15:16:54.236415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:16.276 [2024-07-15 15:16:54.276101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:16.276 [2024-07-15 15:16:54.276165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:21:16.276 [2024-07-15 15:16:54.276178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*:
[FTL][ftl0] duration: 39.688 ms 00:21:16.276 [2024-07-15 15:16:54.276187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.276 [2024-07-15 15:16:54.300793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.276 [2024-07-15 15:16:54.300869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:16.276 [2024-07-15 15:16:54.300887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.601 ms 00:21:16.276 [2024-07-15 15:16:54.300896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.276 [2024-07-15 15:16:54.301067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.276 [2024-07-15 15:16:54.301083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:16.276 [2024-07-15 15:16:54.301092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:21:16.276 [2024-07-15 15:16:54.301103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.276 [2024-07-15 15:16:54.341715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.276 [2024-07-15 15:16:54.341763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:16.276 [2024-07-15 15:16:54.341775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.674 ms 00:21:16.276 [2024-07-15 15:16:54.341801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.276 [2024-07-15 15:16:54.379345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.276 [2024-07-15 15:16:54.379388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:16.276 [2024-07-15 15:16:54.379415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.577 ms 00:21:16.276 [2024-07-15 15:16:54.379425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.536 [2024-07-15 15:16:54.417020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.536 [2024-07-15 15:16:54.417063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:16.536 [2024-07-15 15:16:54.417075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.631 ms 00:21:16.536 [2024-07-15 15:16:54.417083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.536 [2024-07-15 15:16:54.455139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.536 [2024-07-15 15:16:54.455187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:16.536 [2024-07-15 15:16:54.455198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.029 ms 00:21:16.536 [2024-07-15 15:16:54.455210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.536 [2024-07-15 15:16:54.455267] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:16.536 [2024-07-15 15:16:54.455286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:21:16.536 [2024-07-15 15:16:54.455330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.455981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.456003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.456016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.456025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.456035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.456043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.456054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.456062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.456072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.456080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.456093] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.456101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.456111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:16.536 [2024-07-15 15:16:54.456120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:16.537 [2024-07-15 15:16:54.456130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:16.537 [2024-07-15 15:16:54.456139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:16.537 [2024-07-15 15:16:54.456149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:16.537 [2024-07-15 15:16:54.456158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:16.537 [2024-07-15 15:16:54.456168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:16.537 [2024-07-15 15:16:54.456176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:16.537 [2024-07-15 15:16:54.456186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:16.537 [2024-07-15 15:16:54.456195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:16.537 [2024-07-15 15:16:54.456205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:16.537 [2024-07-15 15:16:54.456213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:16.537 [2024-07-15 15:16:54.456224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:16.537 [2024-07-15 15:16:54.456232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:16.537 [2024-07-15 15:16:54.456245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:16.537 [2024-07-15 15:16:54.456253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:16.537 [2024-07-15 15:16:54.456265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:16.537 [2024-07-15 15:16:54.456274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:16.537 [2024-07-15 15:16:54.456284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:16.537 [2024-07-15 15:16:54.456303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:16.537 [2024-07-15 15:16:54.456320] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:16.537 [2024-07-15 15:16:54.456327] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 50c39ba2-bdfc-449c-b0d9-d745c8a98f1e 00:21:16.537 [2024-07-15 15:16:54.456337] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:16.537 [2024-07-15 15:16:54.456345] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:21:16.537 [2024-07-15 15:16:54.456356] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:16.537 [2024-07-15 15:16:54.456363] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:16.537 [2024-07-15 15:16:54.456372] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:16.537 [2024-07-15 15:16:54.456380] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:16.537 [2024-07-15 15:16:54.456390] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:16.537 [2024-07-15 15:16:54.456397] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:16.537 [2024-07-15 15:16:54.456407] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:16.537 [2024-07-15 15:16:54.456414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.537 [2024-07-15 15:16:54.456423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:16.537 [2024-07-15 15:16:54.456432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.151 ms 00:21:16.537 [2024-07-15 15:16:54.456441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.537 [2024-07-15 15:16:54.476675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.537 [2024-07-15 15:16:54.476723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:16.537 [2024-07-15 15:16:54.476752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.227 ms 00:21:16.537 [2024-07-15 15:16:54.476762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.537 [2024-07-15 15:16:54.477296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.537 [2024-07-15 15:16:54.477313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:16.537 [2024-07-15 15:16:54.477322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.506 ms 00:21:16.537 [2024-07-15 15:16:54.477332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.537 [2024-07-15 15:16:54.526824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:16.537 [2024-07-15 15:16:54.526882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:16.537 [2024-07-15 15:16:54.526896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:16.537 [2024-07-15 15:16:54.526910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.537 [2024-07-15 15:16:54.526984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:16.537 [2024-07-15 15:16:54.527010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:16.537 [2024-07-15 15:16:54.527019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:16.537 [2024-07-15 15:16:54.527029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.537 [2024-07-15 15:16:54.527127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:16.537 [2024-07-15 15:16:54.527147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:16.537 [2024-07-15 15:16:54.527156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:16.537 [2024-07-15 15:16:54.527182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.537 [2024-07-15 15:16:54.527201] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:16.537 [2024-07-15 15:16:54.527213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:16.537 [2024-07-15 15:16:54.527222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:16.537 [2024-07-15 15:16:54.527232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.796 [2024-07-15 15:16:54.654985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:16.796 [2024-07-15 15:16:54.655052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:16.796 [2024-07-15 15:16:54.655067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:16.796 [2024-07-15 15:16:54.655080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.796 [2024-07-15 15:16:54.763070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:16.796 [2024-07-15 15:16:54.763146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:16.796 [2024-07-15 15:16:54.763162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:16.796 [2024-07-15 15:16:54.763173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.796 [2024-07-15 15:16:54.763266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:16.796 [2024-07-15 15:16:54.763279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:16.796 [2024-07-15 15:16:54.763291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:16.796 [2024-07-15 15:16:54.763301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.796 [2024-07-15 15:16:54.763344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:16.796 [2024-07-15 15:16:54.763362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:16.796 [2024-07-15 15:16:54.763370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:16.796 [2024-07-15 15:16:54.763380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.796 [2024-07-15 15:16:54.763483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:16.796 [2024-07-15 15:16:54.763502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:16.796 [2024-07-15 15:16:54.763512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:16.796 [2024-07-15 15:16:54.763528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.796 [2024-07-15 15:16:54.763565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:16.796 [2024-07-15 15:16:54.763580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:16.796 [2024-07-15 15:16:54.763589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:16.796 [2024-07-15 15:16:54.763599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.796 [2024-07-15 15:16:54.763640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:16.796 [2024-07-15 15:16:54.763652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:16.796 [2024-07-15 15:16:54.763660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:16.796 [2024-07-15 15:16:54.763673] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:16.796 [2024-07-15 15:16:54.763718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:16.796 [2024-07-15 15:16:54.763730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:16.796 [2024-07-15 15:16:54.763738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:16.796 [2024-07-15 15:16:54.763749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.796 [2024-07-15 15:16:54.763883] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 754.398 ms, result 0 00:21:16.796 true 00:21:16.796 15:16:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # killprocess 81093 00:21:16.796 15:16:54 ftl.ftl_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 81093 ']' 00:21:16.796 15:16:54 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # kill -0 81093 00:21:16.796 15:16:54 ftl.ftl_bdevperf -- common/autotest_common.sh@953 -- # uname 00:21:16.796 15:16:54 ftl.ftl_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:16.796 15:16:54 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81093 00:21:16.796 killing process with pid 81093 00:21:16.796 Received shutdown signal, test time was about 4.000000 seconds 00:21:16.796 00:21:16.796 Latency(us) 00:21:16.796 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.796 =================================================================================================================== 00:21:16.796 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:16.796 15:16:54 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:16.796 15:16:54 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:16.796 15:16:54 ftl.ftl_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81093' 00:21:16.796 15:16:54 ftl.ftl_bdevperf -- common/autotest_common.sh@967 -- # kill 81093 00:21:16.796 15:16:54 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # wait 81093 00:21:23.400 15:17:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:21:23.400 15:17:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:21:23.400 15:17:00 ftl.ftl_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:23.400 15:17:00 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:23.400 15:17:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@41 -- # remove_shm 00:21:23.400 Remove shared memory files 00:21:23.400 15:17:00 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:23.400 15:17:00 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:21:23.400 15:17:00 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:21:23.400 15:17:00 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:21:23.400 15:17:00 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:23.400 15:17:00 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:21:23.400 00:21:23.400 real 0m26.986s 00:21:23.400 user 0m29.490s 00:21:23.400 sys 0m1.154s 00:21:23.400 15:17:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:23.400 15:17:00 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:23.400 ************************************ 00:21:23.400 END TEST ftl_bdevperf 00:21:23.400 
************************************ 00:21:23.400 15:17:00 ftl -- common/autotest_common.sh@1142 -- # return 0 00:21:23.400 15:17:00 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:21:23.400 15:17:00 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:21:23.400 15:17:00 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:23.400 15:17:00 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:23.400 ************************************ 00:21:23.400 START TEST ftl_trim 00:21:23.400 ************************************ 00:21:23.400 15:17:00 ftl.ftl_trim -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:21:23.400 * Looking for test storage... 00:21:23.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:23.400 15:17:00 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:23.400 15:17:00 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:21:23.400 15:17:00 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:23.400 15:17:00 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:23.400 15:17:00 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:21:23.400 15:17:00 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:23.400 15:17:00 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:23.400 15:17:00 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:23.400 15:17:00 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:23.400 15:17:00 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:23.400 15:17:00 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:23.400 15:17:00 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:23.400 15:17:00 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:23.400 15:17:00 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:23.400 15:17:00 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:23.400 15:17:00 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:23.400 15:17:00 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:23.400 15:17:00 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:23.400 15:17:00 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:23.400 15:17:00 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:23.400 15:17:00 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:23.400 15:17:00 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:23.400 15:17:00 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:23.400 15:17:00 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:23.400 15:17:00 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:23.400 
15:17:00 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:23.400 15:17:00 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:23.400 15:17:00 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:23.400 15:17:00 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:23.400 15:17:00 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:23.400 15:17:00 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:21:23.400 15:17:00 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:21:23.400 15:17:00 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:21:23.400 15:17:00 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:21:23.400 15:17:00 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:21:23.400 15:17:00 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:21:23.400 15:17:00 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:21:23.400 15:17:00 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:21:23.400 15:17:00 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:23.400 15:17:00 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:23.400 15:17:00 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:21:23.400 15:17:00 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=81501 00:21:23.400 15:17:00 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:21:23.400 15:17:00 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 81501 00:21:23.400 15:17:00 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 81501 ']' 00:21:23.401 15:17:00 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.401 15:17:00 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:23.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:23.401 15:17:00 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:23.401 15:17:00 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:23.401 15:17:00 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:23.401 [2024-07-15 15:17:00.951294] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
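The trim test brings up its own SPDK target before touching any device: the xtrace above shows spdk_tgt launched with core mask 0x7 (three cores) and waitforlisten blocking on pid 81501 until the RPC socket at /var/tmp/spdk.sock answers. A minimal sketch of that start-and-wait pattern, assuming the same checkout under /home/vagrant/spdk_repo/spdk (an illustration of the pattern, not the actual waitforlisten helper):

    # Launch the target in the background on three cores and remember its pid.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 &
    svcpid=$!
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Poll the default RPC socket until the target answers (or bail if it died).
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$svcpid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done

Once the socket answers, the DPDK/EAL lines that follow are just the target reporting its reactors starting on the three cores requested by the mask.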
00:21:23.401 [2024-07-15 15:17:00.951438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81501 ] 00:21:23.401 [2024-07-15 15:17:01.117678] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:23.401 [2024-07-15 15:17:01.368082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.401 [2024-07-15 15:17:01.368124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.401 [2024-07-15 15:17:01.368147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:24.340 15:17:02 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:24.340 15:17:02 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0 00:21:24.340 15:17:02 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:24.340 15:17:02 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:21:24.340 15:17:02 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:24.340 15:17:02 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:21:24.340 15:17:02 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:21:24.340 15:17:02 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:24.599 15:17:02 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:24.599 15:17:02 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:21:24.599 15:17:02 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:24.599 15:17:02 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:21:24.599 15:17:02 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:24.599 15:17:02 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:21:24.599 15:17:02 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:21:24.599 15:17:02 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:24.858 15:17:02 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:24.858 { 00:21:24.858 "name": "nvme0n1", 00:21:24.858 "aliases": [ 00:21:24.858 "92c599b3-7f4c-4347-8492-9a046badf4ff" 00:21:24.858 ], 00:21:24.858 "product_name": "NVMe disk", 00:21:24.858 "block_size": 4096, 00:21:24.858 "num_blocks": 1310720, 00:21:24.858 "uuid": "92c599b3-7f4c-4347-8492-9a046badf4ff", 00:21:24.858 "assigned_rate_limits": { 00:21:24.858 "rw_ios_per_sec": 0, 00:21:24.858 "rw_mbytes_per_sec": 0, 00:21:24.858 "r_mbytes_per_sec": 0, 00:21:24.859 "w_mbytes_per_sec": 0 00:21:24.859 }, 00:21:24.859 "claimed": true, 00:21:24.859 "claim_type": "read_many_write_one", 00:21:24.859 "zoned": false, 00:21:24.859 "supported_io_types": { 00:21:24.859 "read": true, 00:21:24.859 "write": true, 00:21:24.859 "unmap": true, 00:21:24.859 "flush": true, 00:21:24.859 "reset": true, 00:21:24.859 "nvme_admin": true, 00:21:24.859 "nvme_io": true, 00:21:24.859 "nvme_io_md": false, 00:21:24.859 "write_zeroes": true, 00:21:24.859 "zcopy": false, 00:21:24.859 "get_zone_info": false, 00:21:24.859 "zone_management": false, 00:21:24.859 "zone_append": false, 00:21:24.859 "compare": true, 00:21:24.859 "compare_and_write": false, 00:21:24.859 "abort": true, 00:21:24.859 "seek_hole": false, 00:21:24.859 "seek_data": false, 00:21:24.859 
"copy": true, 00:21:24.859 "nvme_iov_md": false 00:21:24.859 }, 00:21:24.859 "driver_specific": { 00:21:24.859 "nvme": [ 00:21:24.859 { 00:21:24.859 "pci_address": "0000:00:11.0", 00:21:24.859 "trid": { 00:21:24.859 "trtype": "PCIe", 00:21:24.859 "traddr": "0000:00:11.0" 00:21:24.859 }, 00:21:24.859 "ctrlr_data": { 00:21:24.859 "cntlid": 0, 00:21:24.859 "vendor_id": "0x1b36", 00:21:24.859 "model_number": "QEMU NVMe Ctrl", 00:21:24.859 "serial_number": "12341", 00:21:24.859 "firmware_revision": "8.0.0", 00:21:24.859 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:24.859 "oacs": { 00:21:24.859 "security": 0, 00:21:24.859 "format": 1, 00:21:24.859 "firmware": 0, 00:21:24.859 "ns_manage": 1 00:21:24.859 }, 00:21:24.859 "multi_ctrlr": false, 00:21:24.859 "ana_reporting": false 00:21:24.859 }, 00:21:24.859 "vs": { 00:21:24.859 "nvme_version": "1.4" 00:21:24.859 }, 00:21:24.859 "ns_data": { 00:21:24.859 "id": 1, 00:21:24.859 "can_share": false 00:21:24.859 } 00:21:24.859 } 00:21:24.859 ], 00:21:24.859 "mp_policy": "active_passive" 00:21:24.859 } 00:21:24.859 } 00:21:24.859 ]' 00:21:24.859 15:17:02 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:24.859 15:17:02 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:21:24.859 15:17:02 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:24.859 15:17:02 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:21:24.859 15:17:02 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:21:24.859 15:17:02 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:21:24.859 15:17:02 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:21:24.859 15:17:02 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:24.859 15:17:02 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:21:25.116 15:17:02 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:25.116 15:17:02 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:25.116 15:17:03 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=f7fd0b7d-f17c-4cf8-9ebf-4f739f154621 00:21:25.116 15:17:03 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:21:25.116 15:17:03 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f7fd0b7d-f17c-4cf8-9ebf-4f739f154621 00:21:25.373 15:17:03 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:25.631 15:17:03 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=cc85bdcc-f6d7-4334-9ddd-eb9680baf5fd 00:21:25.631 15:17:03 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u cc85bdcc-f6d7-4334-9ddd-eb9680baf5fd 00:21:25.889 15:17:03 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=9efaa1f2-a262-453a-9ad6-510581de9268 00:21:25.889 15:17:03 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 9efaa1f2-a262-453a-9ad6-510581de9268 00:21:25.889 15:17:03 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:21:25.889 15:17:03 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:25.889 15:17:03 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=9efaa1f2-a262-453a-9ad6-510581de9268 00:21:25.889 15:17:03 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:21:25.889 15:17:03 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 9efaa1f2-a262-453a-9ad6-510581de9268 00:21:25.889 15:17:03 
ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=9efaa1f2-a262-453a-9ad6-510581de9268 00:21:25.889 15:17:03 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:25.889 15:17:03 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:21:25.889 15:17:03 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:21:25.889 15:17:03 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9efaa1f2-a262-453a-9ad6-510581de9268 00:21:26.148 15:17:04 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:26.148 { 00:21:26.148 "name": "9efaa1f2-a262-453a-9ad6-510581de9268", 00:21:26.148 "aliases": [ 00:21:26.148 "lvs/nvme0n1p0" 00:21:26.148 ], 00:21:26.148 "product_name": "Logical Volume", 00:21:26.148 "block_size": 4096, 00:21:26.148 "num_blocks": 26476544, 00:21:26.148 "uuid": "9efaa1f2-a262-453a-9ad6-510581de9268", 00:21:26.148 "assigned_rate_limits": { 00:21:26.148 "rw_ios_per_sec": 0, 00:21:26.148 "rw_mbytes_per_sec": 0, 00:21:26.148 "r_mbytes_per_sec": 0, 00:21:26.148 "w_mbytes_per_sec": 0 00:21:26.148 }, 00:21:26.148 "claimed": false, 00:21:26.148 "zoned": false, 00:21:26.148 "supported_io_types": { 00:21:26.148 "read": true, 00:21:26.148 "write": true, 00:21:26.148 "unmap": true, 00:21:26.148 "flush": false, 00:21:26.148 "reset": true, 00:21:26.148 "nvme_admin": false, 00:21:26.148 "nvme_io": false, 00:21:26.148 "nvme_io_md": false, 00:21:26.148 "write_zeroes": true, 00:21:26.148 "zcopy": false, 00:21:26.148 "get_zone_info": false, 00:21:26.148 "zone_management": false, 00:21:26.148 "zone_append": false, 00:21:26.148 "compare": false, 00:21:26.148 "compare_and_write": false, 00:21:26.148 "abort": false, 00:21:26.148 "seek_hole": true, 00:21:26.148 "seek_data": true, 00:21:26.148 "copy": false, 00:21:26.148 "nvme_iov_md": false 00:21:26.148 }, 00:21:26.148 "driver_specific": { 00:21:26.148 "lvol": { 00:21:26.148 "lvol_store_uuid": "cc85bdcc-f6d7-4334-9ddd-eb9680baf5fd", 00:21:26.148 "base_bdev": "nvme0n1", 00:21:26.148 "thin_provision": true, 00:21:26.148 "num_allocated_clusters": 0, 00:21:26.148 "snapshot": false, 00:21:26.148 "clone": false, 00:21:26.148 "esnap_clone": false 00:21:26.148 } 00:21:26.148 } 00:21:26.148 } 00:21:26.148 ]' 00:21:26.148 15:17:04 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:26.148 15:17:04 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:21:26.148 15:17:04 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:26.148 15:17:04 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:26.148 15:17:04 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:26.148 15:17:04 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:21:26.148 15:17:04 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:21:26.148 15:17:04 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:21:26.148 15:17:04 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:26.406 15:17:04 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:26.406 15:17:04 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:26.406 15:17:04 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 9efaa1f2-a262-453a-9ad6-510581de9268 00:21:26.406 15:17:04 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=9efaa1f2-a262-453a-9ad6-510581de9268 00:21:26.406 
15:17:04 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:26.406 15:17:04 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:21:26.406 15:17:04 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:21:26.406 15:17:04 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9efaa1f2-a262-453a-9ad6-510581de9268 00:21:26.665 15:17:04 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:26.665 { 00:21:26.665 "name": "9efaa1f2-a262-453a-9ad6-510581de9268", 00:21:26.665 "aliases": [ 00:21:26.665 "lvs/nvme0n1p0" 00:21:26.665 ], 00:21:26.665 "product_name": "Logical Volume", 00:21:26.665 "block_size": 4096, 00:21:26.665 "num_blocks": 26476544, 00:21:26.665 "uuid": "9efaa1f2-a262-453a-9ad6-510581de9268", 00:21:26.665 "assigned_rate_limits": { 00:21:26.665 "rw_ios_per_sec": 0, 00:21:26.665 "rw_mbytes_per_sec": 0, 00:21:26.665 "r_mbytes_per_sec": 0, 00:21:26.665 "w_mbytes_per_sec": 0 00:21:26.665 }, 00:21:26.665 "claimed": false, 00:21:26.665 "zoned": false, 00:21:26.665 "supported_io_types": { 00:21:26.665 "read": true, 00:21:26.665 "write": true, 00:21:26.665 "unmap": true, 00:21:26.665 "flush": false, 00:21:26.665 "reset": true, 00:21:26.665 "nvme_admin": false, 00:21:26.665 "nvme_io": false, 00:21:26.665 "nvme_io_md": false, 00:21:26.665 "write_zeroes": true, 00:21:26.665 "zcopy": false, 00:21:26.665 "get_zone_info": false, 00:21:26.665 "zone_management": false, 00:21:26.665 "zone_append": false, 00:21:26.665 "compare": false, 00:21:26.665 "compare_and_write": false, 00:21:26.665 "abort": false, 00:21:26.665 "seek_hole": true, 00:21:26.665 "seek_data": true, 00:21:26.665 "copy": false, 00:21:26.665 "nvme_iov_md": false 00:21:26.665 }, 00:21:26.665 "driver_specific": { 00:21:26.665 "lvol": { 00:21:26.665 "lvol_store_uuid": "cc85bdcc-f6d7-4334-9ddd-eb9680baf5fd", 00:21:26.665 "base_bdev": "nvme0n1", 00:21:26.665 "thin_provision": true, 00:21:26.665 "num_allocated_clusters": 0, 00:21:26.665 "snapshot": false, 00:21:26.665 "clone": false, 00:21:26.665 "esnap_clone": false 00:21:26.665 } 00:21:26.665 } 00:21:26.665 } 00:21:26.665 ]' 00:21:26.665 15:17:04 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:26.665 15:17:04 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:21:26.665 15:17:04 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:26.665 15:17:04 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:26.665 15:17:04 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:26.665 15:17:04 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:21:26.665 15:17:04 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:21:26.665 15:17:04 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:26.923 15:17:04 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:21:26.923 15:17:04 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:21:26.923 15:17:04 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 9efaa1f2-a262-453a-9ad6-510581de9268 00:21:26.923 15:17:04 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=9efaa1f2-a262-453a-9ad6-510581de9268 00:21:26.923 15:17:04 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:26.923 15:17:04 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:21:26.923 15:17:04 ftl.ftl_trim -- 
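The write-buffer side is prepared the same way: the second NVMe controller is attached as nvc0 and a single 5171 MiB split is carved from nvc0n1 to become nvc0n1p0, the NV cache device. 5171 MiB is 5% of the 103424 MiB base volume (103424 * 5 / 100 in integer math), which appears to be the sizing rule common.sh applies when no explicit cache size is given. A sketch of the sizing and split, reusing the jq-based size helper seen in the trace (an illustration of the pattern, not the literal common.sh code):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    base=9efaa1f2-a262-453a-9ad6-510581de9268        # the thin lvol created above
    # Size of the base bdev in MiB: block_size * num_blocks / 1 MiB.
    bs=$("$rpc" bdev_get_bdevs -b "$base" | jq '.[] .block_size')   # 4096
    nb=$("$rpc" bdev_get_bdevs -b "$base" | jq '.[] .num_blocks')   # 26476544
    base_mb=$(( bs * nb / 1024 / 1024 ))                            # 103424
    cache_mb=$(( base_mb * 5 / 100 ))                               # 5171
    # Attach the cache controller and split one cache-sized partition off it.
    "$rpc" bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    "$rpc" bdev_split_create nvc0n1 -s "$cache_mb" 1                # -> nvc0n1p0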
common/autotest_common.sh@1381 -- # local nb 00:21:26.923 15:17:04 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9efaa1f2-a262-453a-9ad6-510581de9268 00:21:27.181 15:17:05 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:27.181 { 00:21:27.181 "name": "9efaa1f2-a262-453a-9ad6-510581de9268", 00:21:27.181 "aliases": [ 00:21:27.181 "lvs/nvme0n1p0" 00:21:27.181 ], 00:21:27.181 "product_name": "Logical Volume", 00:21:27.181 "block_size": 4096, 00:21:27.181 "num_blocks": 26476544, 00:21:27.181 "uuid": "9efaa1f2-a262-453a-9ad6-510581de9268", 00:21:27.181 "assigned_rate_limits": { 00:21:27.181 "rw_ios_per_sec": 0, 00:21:27.181 "rw_mbytes_per_sec": 0, 00:21:27.181 "r_mbytes_per_sec": 0, 00:21:27.181 "w_mbytes_per_sec": 0 00:21:27.181 }, 00:21:27.181 "claimed": false, 00:21:27.181 "zoned": false, 00:21:27.181 "supported_io_types": { 00:21:27.181 "read": true, 00:21:27.181 "write": true, 00:21:27.181 "unmap": true, 00:21:27.181 "flush": false, 00:21:27.181 "reset": true, 00:21:27.181 "nvme_admin": false, 00:21:27.181 "nvme_io": false, 00:21:27.181 "nvme_io_md": false, 00:21:27.181 "write_zeroes": true, 00:21:27.181 "zcopy": false, 00:21:27.181 "get_zone_info": false, 00:21:27.181 "zone_management": false, 00:21:27.181 "zone_append": false, 00:21:27.181 "compare": false, 00:21:27.181 "compare_and_write": false, 00:21:27.181 "abort": false, 00:21:27.181 "seek_hole": true, 00:21:27.181 "seek_data": true, 00:21:27.181 "copy": false, 00:21:27.181 "nvme_iov_md": false 00:21:27.181 }, 00:21:27.181 "driver_specific": { 00:21:27.181 "lvol": { 00:21:27.181 "lvol_store_uuid": "cc85bdcc-f6d7-4334-9ddd-eb9680baf5fd", 00:21:27.181 "base_bdev": "nvme0n1", 00:21:27.181 "thin_provision": true, 00:21:27.181 "num_allocated_clusters": 0, 00:21:27.181 "snapshot": false, 00:21:27.181 "clone": false, 00:21:27.181 "esnap_clone": false 00:21:27.181 } 00:21:27.181 } 00:21:27.181 } 00:21:27.181 ]' 00:21:27.181 15:17:05 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:27.181 15:17:05 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:21:27.181 15:17:05 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:27.181 15:17:05 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:27.181 15:17:05 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:27.181 15:17:05 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:21:27.181 15:17:05 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:21:27.181 15:17:05 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 9efaa1f2-a262-453a-9ad6-510581de9268 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:21:27.441 [2024-07-15 15:17:05.357165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.441 [2024-07-15 15:17:05.357216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:27.441 [2024-07-15 15:17:05.357230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:27.441 [2024-07-15 15:17:05.357242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.441 [2024-07-15 15:17:05.360560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.441 [2024-07-15 15:17:05.360602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:27.441 [2024-07-15 15:17:05.360612] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.298 ms 00:21:27.441 [2024-07-15 15:17:05.360621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.441 [2024-07-15 15:17:05.360748] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:27.441 [2024-07-15 15:17:05.361916] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:27.441 [2024-07-15 15:17:05.361946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.441 [2024-07-15 15:17:05.361958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:27.441 [2024-07-15 15:17:05.361967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.223 ms 00:21:27.441 [2024-07-15 15:17:05.361976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.441 [2024-07-15 15:17:05.362077] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 3a743e15-4abe-4742-97e2-4f048b457e12 00:21:27.441 [2024-07-15 15:17:05.363547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.441 [2024-07-15 15:17:05.363579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:27.441 [2024-07-15 15:17:05.363593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:21:27.441 [2024-07-15 15:17:05.363601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.441 [2024-07-15 15:17:05.371216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.441 [2024-07-15 15:17:05.371249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:27.442 [2024-07-15 15:17:05.371263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.545 ms 00:21:27.442 [2024-07-15 15:17:05.371273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.442 [2024-07-15 15:17:05.371451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.442 [2024-07-15 15:17:05.371470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:27.442 [2024-07-15 15:17:05.371483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:21:27.442 [2024-07-15 15:17:05.371492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.442 [2024-07-15 15:17:05.371549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.442 [2024-07-15 15:17:05.371559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:27.442 [2024-07-15 15:17:05.371572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:27.442 [2024-07-15 15:17:05.371581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.442 [2024-07-15 15:17:05.371626] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:27.442 [2024-07-15 15:17:05.378088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.442 [2024-07-15 15:17:05.378120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:27.442 [2024-07-15 15:17:05.378146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.486 ms 00:21:27.442 [2024-07-15 15:17:05.378156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.442 [2024-07-15 
15:17:05.378221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.442 [2024-07-15 15:17:05.378234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:27.442 [2024-07-15 15:17:05.378243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:27.442 [2024-07-15 15:17:05.378253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.442 [2024-07-15 15:17:05.378283] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:27.442 [2024-07-15 15:17:05.378435] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:27.442 [2024-07-15 15:17:05.378449] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:27.442 [2024-07-15 15:17:05.378464] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:27.442 [2024-07-15 15:17:05.378476] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:27.442 [2024-07-15 15:17:05.378488] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:27.442 [2024-07-15 15:17:05.378497] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:27.442 [2024-07-15 15:17:05.378508] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:27.442 [2024-07-15 15:17:05.378519] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:27.442 [2024-07-15 15:17:05.378550] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:27.442 [2024-07-15 15:17:05.378560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.442 [2024-07-15 15:17:05.378570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:27.442 [2024-07-15 15:17:05.378579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:21:27.442 [2024-07-15 15:17:05.378590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.442 [2024-07-15 15:17:05.378685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.442 [2024-07-15 15:17:05.378702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:27.442 [2024-07-15 15:17:05.378711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:21:27.442 [2024-07-15 15:17:05.378722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.442 [2024-07-15 15:17:05.378845] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:27.442 [2024-07-15 15:17:05.378867] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:27.442 [2024-07-15 15:17:05.378876] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:27.442 [2024-07-15 15:17:05.378887] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:27.442 [2024-07-15 15:17:05.378895] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:27.442 [2024-07-15 15:17:05.378904] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:27.442 [2024-07-15 15:17:05.378912] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:27.442 [2024-07-15 15:17:05.378922] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region 
band_md 00:21:27.442 [2024-07-15 15:17:05.378930] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:27.442 [2024-07-15 15:17:05.378939] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:27.442 [2024-07-15 15:17:05.378947] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:27.442 [2024-07-15 15:17:05.378957] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:27.442 [2024-07-15 15:17:05.378965] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:27.442 [2024-07-15 15:17:05.378976] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:27.442 [2024-07-15 15:17:05.378983] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:27.442 [2024-07-15 15:17:05.379003] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:27.442 [2024-07-15 15:17:05.379011] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:27.442 [2024-07-15 15:17:05.379023] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:27.442 [2024-07-15 15:17:05.379030] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:27.442 [2024-07-15 15:17:05.379039] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:27.442 [2024-07-15 15:17:05.379047] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:27.442 [2024-07-15 15:17:05.379056] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:27.442 [2024-07-15 15:17:05.379064] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:27.442 [2024-07-15 15:17:05.379073] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:27.442 [2024-07-15 15:17:05.379080] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:27.442 [2024-07-15 15:17:05.379089] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:27.442 [2024-07-15 15:17:05.379097] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:27.442 [2024-07-15 15:17:05.379105] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:27.442 [2024-07-15 15:17:05.379113] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:27.442 [2024-07-15 15:17:05.379123] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:27.442 [2024-07-15 15:17:05.379131] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:27.442 [2024-07-15 15:17:05.379140] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:27.442 [2024-07-15 15:17:05.379147] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:27.442 [2024-07-15 15:17:05.379157] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:27.442 [2024-07-15 15:17:05.379165] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:27.442 [2024-07-15 15:17:05.379173] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:27.442 [2024-07-15 15:17:05.379181] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:27.442 [2024-07-15 15:17:05.379190] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:27.442 [2024-07-15 15:17:05.379198] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:27.442 [2024-07-15 15:17:05.379208] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:27.442 [2024-07-15 15:17:05.379215] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:27.442 [2024-07-15 15:17:05.379224] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:27.442 [2024-07-15 15:17:05.379231] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:27.442 [2024-07-15 15:17:05.379241] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:27.442 [2024-07-15 15:17:05.379249] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:27.442 [2024-07-15 15:17:05.379259] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:27.442 [2024-07-15 15:17:05.379267] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:27.442 [2024-07-15 15:17:05.379277] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:27.442 [2024-07-15 15:17:05.379285] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:27.442 [2024-07-15 15:17:05.379296] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:27.442 [2024-07-15 15:17:05.379304] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:27.442 [2024-07-15 15:17:05.379313] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:27.442 [2024-07-15 15:17:05.379320] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:27.442 [2024-07-15 15:17:05.379334] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:27.442 [2024-07-15 15:17:05.379347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:27.442 [2024-07-15 15:17:05.379360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:27.442 [2024-07-15 15:17:05.379368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:27.442 [2024-07-15 15:17:05.379378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:27.442 [2024-07-15 15:17:05.379386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:27.442 [2024-07-15 15:17:05.379396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:27.442 [2024-07-15 15:17:05.379404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:27.442 [2024-07-15 15:17:05.379414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:27.442 [2024-07-15 15:17:05.379422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:27.442 [2024-07-15 15:17:05.379433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:27.442 [2024-07-15 15:17:05.379441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 
ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:27.442 [2024-07-15 15:17:05.379452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:27.442 [2024-07-15 15:17:05.379460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:27.442 [2024-07-15 15:17:05.379470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:27.442 [2024-07-15 15:17:05.379478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:27.442 [2024-07-15 15:17:05.379488] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:27.443 [2024-07-15 15:17:05.379497] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:27.443 [2024-07-15 15:17:05.379507] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:27.443 [2024-07-15 15:17:05.379515] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:27.443 [2024-07-15 15:17:05.379525] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:27.443 [2024-07-15 15:17:05.379533] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:27.443 [2024-07-15 15:17:05.379543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.443 [2024-07-15 15:17:05.379551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:27.443 [2024-07-15 15:17:05.379562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.757 ms 00:21:27.443 [2024-07-15 15:17:05.379570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.443 [2024-07-15 15:17:05.379675] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
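The layout summary above is worth reading against the bdev_ftl_create parameters used here (-d <base lvol> -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10). With 23592960 L2P entries at 4 bytes each, the full logical-to-physical table takes 90 MiB, which matches the l2p region in the NV-cache layout (blocks: 90.00 MiB); --l2p_dram_limit 60 caps how much of that table may stay resident in DRAM, so the startup later reports an l2p maximum resident size of 59 (of 60) MiB. The same entry count fixes the capacity ftl0 exposes: 23592960 4-KiB blocks, i.e. 92160 MiB (90 GiB), roughly the 103424 MiB base volume minus the 10% overprovisioning and the metadata regions. A quick check of the arithmetic:

    # Values taken from the layout dump above (4 KiB logical blocks, 4 B L2P entries).
    echo $(( 23592960 * 4 / 1024 / 1024 ))      # 90    MiB of L2P table
    echo $(( 23592960 * 4096 / 1024 / 1024 ))   # 92160 MiB (90 GiB) exposed by ftl0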
00:21:27.443 [2024-07-15 15:17:05.379701] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:30.750 [2024-07-15 15:17:08.495740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.750 [2024-07-15 15:17:08.495809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:30.750 [2024-07-15 15:17:08.495827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3122.064 ms 00:21:30.750 [2024-07-15 15:17:08.495852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.750 [2024-07-15 15:17:08.541431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.750 [2024-07-15 15:17:08.541484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:30.750 [2024-07-15 15:17:08.541501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.318 ms 00:21:30.750 [2024-07-15 15:17:08.541509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.750 [2024-07-15 15:17:08.541680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.750 [2024-07-15 15:17:08.541691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:30.750 [2024-07-15 15:17:08.541702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:21:30.750 [2024-07-15 15:17:08.541712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.750 [2024-07-15 15:17:08.607615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.750 [2024-07-15 15:17:08.607676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:30.750 [2024-07-15 15:17:08.607697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.990 ms 00:21:30.750 [2024-07-15 15:17:08.607709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.750 [2024-07-15 15:17:08.607831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.750 [2024-07-15 15:17:08.607845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:30.750 [2024-07-15 15:17:08.607859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:30.750 [2024-07-15 15:17:08.607869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.750 [2024-07-15 15:17:08.608364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.750 [2024-07-15 15:17:08.608388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:30.750 [2024-07-15 15:17:08.608403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.460 ms 00:21:30.750 [2024-07-15 15:17:08.608414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.750 [2024-07-15 15:17:08.608561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.750 [2024-07-15 15:17:08.608579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:30.750 [2024-07-15 15:17:08.608593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:21:30.750 [2024-07-15 15:17:08.608603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.750 [2024-07-15 15:17:08.635621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.750 [2024-07-15 15:17:08.635678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:30.750 [2024-07-15 
15:17:08.635695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.023 ms 00:21:30.750 [2024-07-15 15:17:08.635704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.750 [2024-07-15 15:17:08.649946] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:30.750 [2024-07-15 15:17:08.667453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.750 [2024-07-15 15:17:08.667518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:30.750 [2024-07-15 15:17:08.667535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.654 ms 00:21:30.750 [2024-07-15 15:17:08.667545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.750 [2024-07-15 15:17:08.773611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.750 [2024-07-15 15:17:08.773676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:30.750 [2024-07-15 15:17:08.773690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 106.150 ms 00:21:30.750 [2024-07-15 15:17:08.773699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.750 [2024-07-15 15:17:08.773935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.750 [2024-07-15 15:17:08.773968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:30.750 [2024-07-15 15:17:08.773977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:21:30.750 [2024-07-15 15:17:08.774003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.750 [2024-07-15 15:17:08.814162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.751 [2024-07-15 15:17:08.814224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:30.751 [2024-07-15 15:17:08.814241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.200 ms 00:21:30.751 [2024-07-15 15:17:08.814251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.751 [2024-07-15 15:17:08.855515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.751 [2024-07-15 15:17:08.855570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:30.751 [2024-07-15 15:17:08.855586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.246 ms 00:21:30.751 [2024-07-15 15:17:08.855596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.751 [2024-07-15 15:17:08.856517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.751 [2024-07-15 15:17:08.856548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:30.751 [2024-07-15 15:17:08.856558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.824 ms 00:21:30.751 [2024-07-15 15:17:08.856567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.011 [2024-07-15 15:17:08.978603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.011 [2024-07-15 15:17:08.978672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:31.011 [2024-07-15 15:17:08.978688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 122.233 ms 00:21:31.011 [2024-07-15 15:17:08.978702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.011 [2024-07-15 
15:17:09.022001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.011 [2024-07-15 15:17:09.022055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:31.011 [2024-07-15 15:17:09.022069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.273 ms 00:21:31.011 [2024-07-15 15:17:09.022082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.011 [2024-07-15 15:17:09.062666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.011 [2024-07-15 15:17:09.062720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:31.011 [2024-07-15 15:17:09.062734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.575 ms 00:21:31.011 [2024-07-15 15:17:09.062744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.011 [2024-07-15 15:17:09.103810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.011 [2024-07-15 15:17:09.103869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:31.011 [2024-07-15 15:17:09.103883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.054 ms 00:21:31.011 [2024-07-15 15:17:09.103892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.011 [2024-07-15 15:17:09.104013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.011 [2024-07-15 15:17:09.104040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:31.011 [2024-07-15 15:17:09.104048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:31.011 [2024-07-15 15:17:09.104060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.011 [2024-07-15 15:17:09.104161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.011 [2024-07-15 15:17:09.104172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:31.011 [2024-07-15 15:17:09.104180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:21:31.011 [2024-07-15 15:17:09.104209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.011 [2024-07-15 15:17:09.105256] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:31.011 [2024-07-15 15:17:09.111338] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3754.985 ms, result 0 00:21:31.011 [2024-07-15 15:17:09.112189] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:31.011 { 00:21:31.011 "name": "ftl0", 00:21:31.011 "uuid": "3a743e15-4abe-4742-97e2-4f048b457e12" 00:21:31.011 } 00:21:31.269 15:17:09 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:21:31.269 15:17:09 ftl.ftl_trim -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:21:31.269 15:17:09 ftl.ftl_trim -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:31.269 15:17:09 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local i 00:21:31.269 15:17:09 ftl.ftl_trim -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:31.269 15:17:09 ftl.ftl_trim -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:31.269 15:17:09 ftl.ftl_trim -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:31.269 15:17:09 ftl.ftl_trim -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:21:31.528 [ 00:21:31.528 { 00:21:31.528 "name": "ftl0", 00:21:31.528 "aliases": [ 00:21:31.528 "3a743e15-4abe-4742-97e2-4f048b457e12" 00:21:31.528 ], 00:21:31.528 "product_name": "FTL disk", 00:21:31.528 "block_size": 4096, 00:21:31.528 "num_blocks": 23592960, 00:21:31.528 "uuid": "3a743e15-4abe-4742-97e2-4f048b457e12", 00:21:31.528 "assigned_rate_limits": { 00:21:31.528 "rw_ios_per_sec": 0, 00:21:31.528 "rw_mbytes_per_sec": 0, 00:21:31.528 "r_mbytes_per_sec": 0, 00:21:31.528 "w_mbytes_per_sec": 0 00:21:31.528 }, 00:21:31.528 "claimed": false, 00:21:31.528 "zoned": false, 00:21:31.528 "supported_io_types": { 00:21:31.528 "read": true, 00:21:31.528 "write": true, 00:21:31.528 "unmap": true, 00:21:31.528 "flush": true, 00:21:31.528 "reset": false, 00:21:31.528 "nvme_admin": false, 00:21:31.528 "nvme_io": false, 00:21:31.528 "nvme_io_md": false, 00:21:31.528 "write_zeroes": true, 00:21:31.528 "zcopy": false, 00:21:31.528 "get_zone_info": false, 00:21:31.528 "zone_management": false, 00:21:31.528 "zone_append": false, 00:21:31.528 "compare": false, 00:21:31.528 "compare_and_write": false, 00:21:31.528 "abort": false, 00:21:31.528 "seek_hole": false, 00:21:31.528 "seek_data": false, 00:21:31.528 "copy": false, 00:21:31.528 "nvme_iov_md": false 00:21:31.528 }, 00:21:31.528 "driver_specific": { 00:21:31.528 "ftl": { 00:21:31.528 "base_bdev": "9efaa1f2-a262-453a-9ad6-510581de9268", 00:21:31.528 "cache": "nvc0n1p0" 00:21:31.528 } 00:21:31.528 } 00:21:31.528 } 00:21:31.528 ] 00:21:31.528 15:17:09 ftl.ftl_trim -- common/autotest_common.sh@905 -- # return 0 00:21:31.528 15:17:09 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:21:31.528 15:17:09 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:31.787 15:17:09 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:21:31.787 15:17:09 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:21:32.046 15:17:09 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:21:32.046 { 00:21:32.046 "name": "ftl0", 00:21:32.046 "aliases": [ 00:21:32.046 "3a743e15-4abe-4742-97e2-4f048b457e12" 00:21:32.046 ], 00:21:32.046 "product_name": "FTL disk", 00:21:32.046 "block_size": 4096, 00:21:32.046 "num_blocks": 23592960, 00:21:32.046 "uuid": "3a743e15-4abe-4742-97e2-4f048b457e12", 00:21:32.046 "assigned_rate_limits": { 00:21:32.046 "rw_ios_per_sec": 0, 00:21:32.046 "rw_mbytes_per_sec": 0, 00:21:32.046 "r_mbytes_per_sec": 0, 00:21:32.047 "w_mbytes_per_sec": 0 00:21:32.047 }, 00:21:32.047 "claimed": false, 00:21:32.047 "zoned": false, 00:21:32.047 "supported_io_types": { 00:21:32.047 "read": true, 00:21:32.047 "write": true, 00:21:32.047 "unmap": true, 00:21:32.047 "flush": true, 00:21:32.047 "reset": false, 00:21:32.047 "nvme_admin": false, 00:21:32.047 "nvme_io": false, 00:21:32.047 "nvme_io_md": false, 00:21:32.047 "write_zeroes": true, 00:21:32.047 "zcopy": false, 00:21:32.047 "get_zone_info": false, 00:21:32.047 "zone_management": false, 00:21:32.047 "zone_append": false, 00:21:32.047 "compare": false, 00:21:32.047 "compare_and_write": false, 00:21:32.047 "abort": false, 00:21:32.047 "seek_hole": false, 00:21:32.047 "seek_data": false, 00:21:32.047 "copy": false, 00:21:32.047 "nvme_iov_md": false 00:21:32.047 }, 00:21:32.047 "driver_specific": { 00:21:32.047 "ftl": { 00:21:32.047 "base_bdev": "9efaa1f2-a262-453a-9ad6-510581de9268", 00:21:32.047 "cache": "nvc0n1p0" 
00:21:32.047 } 00:21:32.047 } 00:21:32.047 } 00:21:32.047 ]' 00:21:32.047 15:17:09 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:21:32.047 15:17:09 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:21:32.047 15:17:09 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:32.047 [2024-07-15 15:17:10.093451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.047 [2024-07-15 15:17:10.093505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:32.047 [2024-07-15 15:17:10.093523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:32.047 [2024-07-15 15:17:10.093531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.047 [2024-07-15 15:17:10.093566] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:32.047 [2024-07-15 15:17:10.097699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.047 [2024-07-15 15:17:10.097728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:32.047 [2024-07-15 15:17:10.097738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.126 ms 00:21:32.047 [2024-07-15 15:17:10.097752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.047 [2024-07-15 15:17:10.098317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.047 [2024-07-15 15:17:10.098343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:32.047 [2024-07-15 15:17:10.098352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.504 ms 00:21:32.047 [2024-07-15 15:17:10.098362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.047 [2024-07-15 15:17:10.101494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.047 [2024-07-15 15:17:10.101514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:32.047 [2024-07-15 15:17:10.101539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.110 ms 00:21:32.047 [2024-07-15 15:17:10.101549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.047 [2024-07-15 15:17:10.107851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.047 [2024-07-15 15:17:10.107886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:32.047 [2024-07-15 15:17:10.107897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.284 ms 00:21:32.047 [2024-07-15 15:17:10.107906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.047 [2024-07-15 15:17:10.149182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.047 [2024-07-15 15:17:10.149231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:32.047 [2024-07-15 15:17:10.149244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.244 ms 00:21:32.047 [2024-07-15 15:17:10.149257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.308 [2024-07-15 15:17:10.176014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.308 [2024-07-15 15:17:10.176071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:32.308 [2024-07-15 15:17:10.176101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.719 ms 00:21:32.308 
[2024-07-15 15:17:10.176128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.308 [2024-07-15 15:17:10.176408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.308 [2024-07-15 15:17:10.176430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:32.308 [2024-07-15 15:17:10.176440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:21:32.308 [2024-07-15 15:17:10.176451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.308 [2024-07-15 15:17:10.220430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.308 [2024-07-15 15:17:10.220481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:32.308 [2024-07-15 15:17:10.220493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.030 ms 00:21:32.308 [2024-07-15 15:17:10.220503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.308 [2024-07-15 15:17:10.260873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.308 [2024-07-15 15:17:10.260925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:32.308 [2024-07-15 15:17:10.260937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.329 ms 00:21:32.308 [2024-07-15 15:17:10.260966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.308 [2024-07-15 15:17:10.303694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.308 [2024-07-15 15:17:10.303747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:32.308 [2024-07-15 15:17:10.303762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.708 ms 00:21:32.308 [2024-07-15 15:17:10.303772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.308 [2024-07-15 15:17:10.344376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.308 [2024-07-15 15:17:10.344440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:32.308 [2024-07-15 15:17:10.344454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.521 ms 00:21:32.308 [2024-07-15 15:17:10.344462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.308 [2024-07-15 15:17:10.344550] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:32.308 [2024-07-15 15:17:10.344568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.344578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.344587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.344595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.344604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.344611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.344624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.344631] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.344640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.344648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.344657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.344664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.344673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.344680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.344689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.344712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.344722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.344729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.344738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.344746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.344757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.344764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.344777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.344784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.344794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.344803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.344812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.344819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.344829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.344855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.344866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.344873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.344883] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.344891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.344900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.344908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.344917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.344924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.344936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.344944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.344954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.344961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.344971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.344978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.344988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.345010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.345021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.345028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.345038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:32.308 [2024-07-15 15:17:10.345046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 
15:17:10.345116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 
00:21:32.309 [2024-07-15 15:17:10.345338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:32.309 [2024-07-15 15:17:10.345504] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:32.309 [2024-07-15 15:17:10.345511] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3a743e15-4abe-4742-97e2-4f048b457e12 00:21:32.309 [2024-07-15 15:17:10.345523] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:32.309 [2024-07-15 15:17:10.345531] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:32.309 [2024-07-15 15:17:10.345542] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:32.309 [2024-07-15 15:17:10.345550] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:32.309 [2024-07-15 15:17:10.345559] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:32.309 [2024-07-15 15:17:10.345567] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:32.309 [2024-07-15 15:17:10.345576] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:32.309 [2024-07-15 15:17:10.345583] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:32.309 [2024-07-15 15:17:10.345591] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:32.309 [2024-07-15 15:17:10.345599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.309 [2024-07-15 15:17:10.345609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:32.309 [2024-07-15 15:17:10.345617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.052 ms 00:21:32.309 [2024-07-15 15:17:10.345626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.309 [2024-07-15 15:17:10.367062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.309 [2024-07-15 15:17:10.367104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:32.309 [2024-07-15 15:17:10.367116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.444 ms 00:21:32.309 [2024-07-15 15:17:10.367129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.309 [2024-07-15 15:17:10.367737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.309 [2024-07-15 15:17:10.367758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:32.309 [2024-07-15 15:17:10.367768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:21:32.309 [2024-07-15 15:17:10.367778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.568 [2024-07-15 15:17:10.442752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:32.568 [2024-07-15 15:17:10.442825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:32.568 [2024-07-15 15:17:10.442839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:32.568 [2024-07-15 15:17:10.442850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.568 [2024-07-15 15:17:10.442974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:32.568 [2024-07-15 15:17:10.442987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:32.568 [2024-07-15 15:17:10.442997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:32.568 [2024-07-15 15:17:10.443015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.568 [2024-07-15 15:17:10.443085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:32.568 [2024-07-15 15:17:10.443104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:32.568 [2024-07-15 15:17:10.443113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:32.568 [2024-07-15 15:17:10.443126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.568 [2024-07-15 15:17:10.443161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:32.568 [2024-07-15 15:17:10.443171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:32.568 [2024-07-15 15:17:10.443180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:32.568 [2024-07-15 15:17:10.443190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.568 [2024-07-15 15:17:10.582371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:21:32.568 [2024-07-15 15:17:10.582441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:32.568 [2024-07-15 15:17:10.582453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:32.568 [2024-07-15 15:17:10.582481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.828 [2024-07-15 15:17:10.698525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:32.828 [2024-07-15 15:17:10.698600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:32.828 [2024-07-15 15:17:10.698614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:32.828 [2024-07-15 15:17:10.698625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.828 [2024-07-15 15:17:10.698735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:32.828 [2024-07-15 15:17:10.698748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:32.828 [2024-07-15 15:17:10.698761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:32.828 [2024-07-15 15:17:10.698775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.828 [2024-07-15 15:17:10.698834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:32.828 [2024-07-15 15:17:10.698846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:32.828 [2024-07-15 15:17:10.698854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:32.828 [2024-07-15 15:17:10.698866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.828 [2024-07-15 15:17:10.699044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:32.828 [2024-07-15 15:17:10.699069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:32.828 [2024-07-15 15:17:10.699097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:32.828 [2024-07-15 15:17:10.699112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.828 [2024-07-15 15:17:10.699175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:32.828 [2024-07-15 15:17:10.699192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:32.828 [2024-07-15 15:17:10.699201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:32.828 [2024-07-15 15:17:10.699212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.828 [2024-07-15 15:17:10.699267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:32.828 [2024-07-15 15:17:10.699280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:32.828 [2024-07-15 15:17:10.699289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:32.828 [2024-07-15 15:17:10.699305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.828 [2024-07-15 15:17:10.699366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:32.828 [2024-07-15 15:17:10.699379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:32.828 [2024-07-15 15:17:10.699388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:32.828 [2024-07-15 15:17:10.699400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.828 [2024-07-15 
15:17:10.699603] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 607.303 ms, result 0 00:21:32.828 true 00:21:32.828 15:17:10 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 81501 00:21:32.828 15:17:10 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 81501 ']' 00:21:32.828 15:17:10 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 81501 00:21:32.828 15:17:10 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname 00:21:32.828 15:17:10 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:32.828 15:17:10 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81501 00:21:32.828 15:17:10 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:32.828 killing process with pid 81501 00:21:32.828 15:17:10 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:32.828 15:17:10 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81501' 00:21:32.828 15:17:10 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 81501 00:21:32.828 15:17:10 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 81501 00:21:40.944 15:17:17 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:21:40.944 65536+0 records in 00:21:40.944 65536+0 records out 00:21:40.944 268435456 bytes (268 MB, 256 MiB) copied, 0.895505 s, 300 MB/s 00:21:40.944 15:17:18 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:40.944 [2024-07-15 15:17:18.580082] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
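Before the spdk_dd run continues below, the shell-level sequence traced just above (trim.sh lines 60-69: read num_blocks with jq, bdev_ftl_unload over RPC, killprocess on pid 81501, dd a random pattern, replay it onto ftl0 with spdk_dd) can be sketched standalone roughly as follows. This is a minimal sketch, not the trim.sh source: the RPC that produced the JSON fed to jq and the dd output path are not captured in this excerpt, so bdev_get_bdevs and the random_pattern destination are assumptions, and $pid stands in for however the test tracks the target app's pid.

  # Minimal sketch of the traced sequence; items marked "assumed" are not shown in the log
  SPDK=/home/vagrant/spdk_repo/spdk
  PATTERN=$SPDK/test/ftl/random_pattern            # pattern file later read by spdk_dd
  # Block count of the FTL bdev (23592960 in this run); bdev_get_bdevs is the assumed JSON source
  nb=$("$SPDK/scripts/rpc.py" bdev_get_bdevs -b ftl0 | jq '.[] .num_blocks')
  echo "ftl0 num_blocks=$nb"
  # Tear down the FTL instance, then stop the hosting app (pid 81501 in this run)
  "$SPDK/scripts/rpc.py" bdev_ftl_unload -b ftl0
  kill "$pid" && wait "$pid"                       # mirrors the killprocess helper; wait only succeeds if the app is a child of this shell
  # 65536 x 4 KiB = 256 MiB of random data; the of= path is assumed to be the pattern file
  dd if=/dev/urandom of="$PATTERN" bs=4K count=65536
  # spdk_dd loads the bdev config from ftl.json, brings ftl0 back up, and copies the pattern onto it
  "$SPDK/build/bin/spdk_dd" --if="$PATTERN" --ob=ftl0 --json="$SPDK/test/ftl/config/ftl.json"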
00:21:40.944 [2024-07-15 15:17:18.580203] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81743 ] 00:21:40.944 [2024-07-15 15:17:18.742317] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.944 [2024-07-15 15:17:18.981934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.535 [2024-07-15 15:17:19.400115] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:41.535 [2024-07-15 15:17:19.400190] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:41.535 [2024-07-15 15:17:19.557927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.535 [2024-07-15 15:17:19.557988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:41.535 [2024-07-15 15:17:19.558009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:41.535 [2024-07-15 15:17:19.558016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.535 [2024-07-15 15:17:19.560975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.535 [2024-07-15 15:17:19.561025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:41.535 [2024-07-15 15:17:19.561037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.945 ms 00:21:41.535 [2024-07-15 15:17:19.561045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.535 [2024-07-15 15:17:19.561170] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:41.535 [2024-07-15 15:17:19.562300] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:41.535 [2024-07-15 15:17:19.562333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.535 [2024-07-15 15:17:19.562342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:41.535 [2024-07-15 15:17:19.562351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.176 ms 00:21:41.535 [2024-07-15 15:17:19.562358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.535 [2024-07-15 15:17:19.563806] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:41.535 [2024-07-15 15:17:19.585570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.535 [2024-07-15 15:17:19.585653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:41.535 [2024-07-15 15:17:19.585689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.804 ms 00:21:41.535 [2024-07-15 15:17:19.585698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.535 [2024-07-15 15:17:19.585875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.535 [2024-07-15 15:17:19.585894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:41.535 [2024-07-15 15:17:19.585903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:21:41.535 [2024-07-15 15:17:19.585911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.535 [2024-07-15 15:17:19.593349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:41.535 [2024-07-15 15:17:19.593392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:41.535 [2024-07-15 15:17:19.593418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.404 ms 00:21:41.535 [2024-07-15 15:17:19.593426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.535 [2024-07-15 15:17:19.593539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.535 [2024-07-15 15:17:19.593556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:41.535 [2024-07-15 15:17:19.593566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:21:41.535 [2024-07-15 15:17:19.593573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.535 [2024-07-15 15:17:19.593614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.535 [2024-07-15 15:17:19.593622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:41.535 [2024-07-15 15:17:19.593630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:41.535 [2024-07-15 15:17:19.593640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.535 [2024-07-15 15:17:19.593666] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:41.535 [2024-07-15 15:17:19.599552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.535 [2024-07-15 15:17:19.599609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:41.535 [2024-07-15 15:17:19.599625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.904 ms 00:21:41.535 [2024-07-15 15:17:19.599636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.535 [2024-07-15 15:17:19.599760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.535 [2024-07-15 15:17:19.599776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:41.535 [2024-07-15 15:17:19.599787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:21:41.535 [2024-07-15 15:17:19.599797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.535 [2024-07-15 15:17:19.599830] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:41.535 [2024-07-15 15:17:19.599857] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:41.535 [2024-07-15 15:17:19.599909] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:41.535 [2024-07-15 15:17:19.599945] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:21:41.535 [2024-07-15 15:17:19.600094] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:41.535 [2024-07-15 15:17:19.600109] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:41.535 [2024-07-15 15:17:19.600124] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:41.535 [2024-07-15 15:17:19.600153] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:41.535 [2024-07-15 15:17:19.600165] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:41.535 [2024-07-15 15:17:19.600177] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:41.535 [2024-07-15 15:17:19.600193] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:41.535 [2024-07-15 15:17:19.600204] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:41.535 [2024-07-15 15:17:19.600215] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:41.535 [2024-07-15 15:17:19.600227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.535 [2024-07-15 15:17:19.600238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:41.535 [2024-07-15 15:17:19.600250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:21:41.535 [2024-07-15 15:17:19.600261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.535 [2024-07-15 15:17:19.600407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.535 [2024-07-15 15:17:19.600423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:41.535 [2024-07-15 15:17:19.600435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:21:41.535 [2024-07-15 15:17:19.600449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.535 [2024-07-15 15:17:19.600583] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:41.535 [2024-07-15 15:17:19.600605] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:41.535 [2024-07-15 15:17:19.600619] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:41.535 [2024-07-15 15:17:19.600632] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:41.535 [2024-07-15 15:17:19.600646] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:41.535 [2024-07-15 15:17:19.600657] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:41.535 [2024-07-15 15:17:19.600668] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:41.535 [2024-07-15 15:17:19.600679] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:41.535 [2024-07-15 15:17:19.600691] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:41.535 [2024-07-15 15:17:19.600701] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:41.535 [2024-07-15 15:17:19.600713] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:41.535 [2024-07-15 15:17:19.600724] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:41.535 [2024-07-15 15:17:19.600734] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:41.536 [2024-07-15 15:17:19.600746] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:41.536 [2024-07-15 15:17:19.600757] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:41.536 [2024-07-15 15:17:19.600768] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:41.536 [2024-07-15 15:17:19.600779] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:41.536 [2024-07-15 15:17:19.600791] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:41.536 [2024-07-15 15:17:19.600825] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:41.536 [2024-07-15 15:17:19.600839] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:41.536 [2024-07-15 15:17:19.600850] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:41.536 [2024-07-15 15:17:19.600863] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:41.536 [2024-07-15 15:17:19.600873] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:41.536 [2024-07-15 15:17:19.600884] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:41.536 [2024-07-15 15:17:19.600894] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:41.536 [2024-07-15 15:17:19.600904] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:41.536 [2024-07-15 15:17:19.600915] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:41.536 [2024-07-15 15:17:19.600926] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:41.536 [2024-07-15 15:17:19.600936] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:41.536 [2024-07-15 15:17:19.600947] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:41.536 [2024-07-15 15:17:19.600959] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:41.536 [2024-07-15 15:17:19.600971] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:41.536 [2024-07-15 15:17:19.600983] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:41.536 [2024-07-15 15:17:19.601007] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:41.536 [2024-07-15 15:17:19.601020] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:41.536 [2024-07-15 15:17:19.601031] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:41.536 [2024-07-15 15:17:19.601042] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:41.536 [2024-07-15 15:17:19.601054] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:41.536 [2024-07-15 15:17:19.601066] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:41.536 [2024-07-15 15:17:19.601078] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:41.536 [2024-07-15 15:17:19.601089] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:41.536 [2024-07-15 15:17:19.601100] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:41.536 [2024-07-15 15:17:19.601111] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:41.536 [2024-07-15 15:17:19.601123] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:41.536 [2024-07-15 15:17:19.601137] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:41.536 [2024-07-15 15:17:19.601149] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:41.536 [2024-07-15 15:17:19.601161] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:41.536 [2024-07-15 15:17:19.601174] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:41.536 [2024-07-15 15:17:19.601187] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:41.536 [2024-07-15 15:17:19.601198] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:41.536 
[2024-07-15 15:17:19.601208] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:41.536 [2024-07-15 15:17:19.601219] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:41.536 [2024-07-15 15:17:19.601231] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:41.536 [2024-07-15 15:17:19.601246] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:41.536 [2024-07-15 15:17:19.601266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:41.536 [2024-07-15 15:17:19.601280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:41.536 [2024-07-15 15:17:19.601293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:41.536 [2024-07-15 15:17:19.601305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:41.536 [2024-07-15 15:17:19.601317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:41.536 [2024-07-15 15:17:19.601329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:41.536 [2024-07-15 15:17:19.601340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:41.536 [2024-07-15 15:17:19.601353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:41.536 [2024-07-15 15:17:19.601366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:41.536 [2024-07-15 15:17:19.601378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:41.536 [2024-07-15 15:17:19.601390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:41.536 [2024-07-15 15:17:19.601402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:41.536 [2024-07-15 15:17:19.601413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:41.536 [2024-07-15 15:17:19.601427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:41.536 [2024-07-15 15:17:19.601439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:41.536 [2024-07-15 15:17:19.601451] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:41.536 [2024-07-15 15:17:19.601465] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:41.536 [2024-07-15 15:17:19.601479] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:41.536 [2024-07-15 15:17:19.601492] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:41.536 [2024-07-15 15:17:19.601506] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:41.536 [2024-07-15 15:17:19.601519] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:41.536 [2024-07-15 15:17:19.601537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.536 [2024-07-15 15:17:19.601552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:41.536 [2024-07-15 15:17:19.601567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.034 ms 00:21:41.536 [2024-07-15 15:17:19.601579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.796 [2024-07-15 15:17:19.660892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.796 [2024-07-15 15:17:19.660950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:41.796 [2024-07-15 15:17:19.660963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.321 ms 00:21:41.796 [2024-07-15 15:17:19.660971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.796 [2024-07-15 15:17:19.661144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.796 [2024-07-15 15:17:19.661155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:41.796 [2024-07-15 15:17:19.661163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:21:41.796 [2024-07-15 15:17:19.661192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.796 [2024-07-15 15:17:19.713281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.796 [2024-07-15 15:17:19.713336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:41.797 [2024-07-15 15:17:19.713348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.164 ms 00:21:41.797 [2024-07-15 15:17:19.713357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.797 [2024-07-15 15:17:19.713489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.797 [2024-07-15 15:17:19.713499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:41.797 [2024-07-15 15:17:19.713508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:41.797 [2024-07-15 15:17:19.713515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.797 [2024-07-15 15:17:19.713950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.797 [2024-07-15 15:17:19.713968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:41.797 [2024-07-15 15:17:19.713978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:21:41.797 [2024-07-15 15:17:19.713985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.797 [2024-07-15 15:17:19.714117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.797 [2024-07-15 15:17:19.714138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:41.797 [2024-07-15 15:17:19.714147] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:21:41.797 [2024-07-15 15:17:19.714154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.797 [2024-07-15 15:17:19.737012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.797 [2024-07-15 15:17:19.737062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:41.797 [2024-07-15 15:17:19.737076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.875 ms 00:21:41.797 [2024-07-15 15:17:19.737083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.797 [2024-07-15 15:17:19.759562] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:41.797 [2024-07-15 15:17:19.759625] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:41.797 [2024-07-15 15:17:19.759642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.797 [2024-07-15 15:17:19.759651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:41.797 [2024-07-15 15:17:19.759662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.441 ms 00:21:41.797 [2024-07-15 15:17:19.759669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.797 [2024-07-15 15:17:19.792030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.797 [2024-07-15 15:17:19.792140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:41.797 [2024-07-15 15:17:19.792157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.277 ms 00:21:41.797 [2024-07-15 15:17:19.792165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.797 [2024-07-15 15:17:19.813481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.797 [2024-07-15 15:17:19.813543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:41.797 [2024-07-15 15:17:19.813557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.202 ms 00:21:41.797 [2024-07-15 15:17:19.813564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.797 [2024-07-15 15:17:19.834384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.797 [2024-07-15 15:17:19.834452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:41.797 [2024-07-15 15:17:19.834464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.720 ms 00:21:41.797 [2024-07-15 15:17:19.834472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.797 [2024-07-15 15:17:19.835395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.797 [2024-07-15 15:17:19.835419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:41.797 [2024-07-15 15:17:19.835433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.758 ms 00:21:41.797 [2024-07-15 15:17:19.835440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.054 [2024-07-15 15:17:19.932468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.054 [2024-07-15 15:17:19.932541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:42.054 [2024-07-15 15:17:19.932561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 97.179 ms 00:21:42.054 [2024-07-15 15:17:19.932569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.054 [2024-07-15 15:17:19.948624] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:42.054 [2024-07-15 15:17:19.965913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.054 [2024-07-15 15:17:19.965979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:42.054 [2024-07-15 15:17:19.966005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.250 ms 00:21:42.054 [2024-07-15 15:17:19.966013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.054 [2024-07-15 15:17:19.966132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.054 [2024-07-15 15:17:19.966142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:42.054 [2024-07-15 15:17:19.966151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:42.054 [2024-07-15 15:17:19.966162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.054 [2024-07-15 15:17:19.966236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.054 [2024-07-15 15:17:19.966245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:42.054 [2024-07-15 15:17:19.966252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:42.054 [2024-07-15 15:17:19.966260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.054 [2024-07-15 15:17:19.966280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.054 [2024-07-15 15:17:19.966288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:42.054 [2024-07-15 15:17:19.966296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:42.054 [2024-07-15 15:17:19.966303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.054 [2024-07-15 15:17:19.966337] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:42.054 [2024-07-15 15:17:19.966347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.054 [2024-07-15 15:17:19.966355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:42.054 [2024-07-15 15:17:19.966362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:42.054 [2024-07-15 15:17:19.966369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.054 [2024-07-15 15:17:20.009561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.054 [2024-07-15 15:17:20.009631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:42.054 [2024-07-15 15:17:20.009645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.250 ms 00:21:42.054 [2024-07-15 15:17:20.009663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.054 [2024-07-15 15:17:20.009849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.054 [2024-07-15 15:17:20.009860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:42.054 [2024-07-15 15:17:20.009869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:21:42.054 [2024-07-15 15:17:20.009876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
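Two figures in the startup trace above tie directly back to the earlier shell steps, and the copy progress reported just below counts up to the same total the dd step produced. A quick check of that arithmetic, using only numbers already printed in this log:

  # dd wrote 65536 blocks of 4 KiB: matches "268435456 bytes (268 MB, 256 MiB)" and the 256 MB copy total below
  echo $((65536 * 4096)) bytes               # 268435456
  echo $((65536 * 4096 / 1024 / 1024)) MiB   # 256
  # "L2P entries: 23592960" reported at layout setup equals the num_blocks value read with jq before the unload
  nb=23592960; l2p=23592960
  [ "$nb" -eq "$l2p" ] && echo "ftl0 num_blocks matches L2P entry count"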
00:21:42.054 [2024-07-15 15:17:20.011097] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:42.054 [2024-07-15 15:17:20.017208] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 453.682 ms, result 0 00:21:42.054 [2024-07-15 15:17:20.018111] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:42.054 [2024-07-15 15:17:20.038057] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:50.636  Copying: 29/256 [MB] (29 MBps) Copying: 59/256 [MB] (29 MBps) Copying: 90/256 [MB] (30 MBps) Copying: 120/256 [MB] (30 MBps) Copying: 149/256 [MB] (29 MBps) Copying: 180/256 [MB] (30 MBps) Copying: 210/256 [MB] (30 MBps) Copying: 240/256 [MB] (29 MBps) Copying: 256/256 [MB] (average 30 MBps)[2024-07-15 15:17:28.546954] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:50.636 [2024-07-15 15:17:28.562700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.636 [2024-07-15 15:17:28.562754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:50.636 [2024-07-15 15:17:28.562769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:50.636 [2024-07-15 15:17:28.562778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.636 [2024-07-15 15:17:28.562803] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:50.636 [2024-07-15 15:17:28.566601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.636 [2024-07-15 15:17:28.566627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:50.636 [2024-07-15 15:17:28.566637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.792 ms 00:21:50.636 [2024-07-15 15:17:28.566653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.636 [2024-07-15 15:17:28.568667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.636 [2024-07-15 15:17:28.568714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:50.636 [2024-07-15 15:17:28.568724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.994 ms 00:21:50.636 [2024-07-15 15:17:28.568731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.636 [2024-07-15 15:17:28.575248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.636 [2024-07-15 15:17:28.575287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:50.636 [2024-07-15 15:17:28.575298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.510 ms 00:21:50.636 [2024-07-15 15:17:28.575307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.636 [2024-07-15 15:17:28.581234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.636 [2024-07-15 15:17:28.581265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:50.636 [2024-07-15 15:17:28.581275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.873 ms 00:21:50.636 [2024-07-15 15:17:28.581282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.636 [2024-07-15 15:17:28.623191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:21:50.636 [2024-07-15 15:17:28.623257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:50.636 [2024-07-15 15:17:28.623271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.926 ms 00:21:50.636 [2024-07-15 15:17:28.623279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.636 [2024-07-15 15:17:28.647457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.636 [2024-07-15 15:17:28.647511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:50.636 [2024-07-15 15:17:28.647525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.125 ms 00:21:50.636 [2024-07-15 15:17:28.647532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.636 [2024-07-15 15:17:28.647693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.636 [2024-07-15 15:17:28.647707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:50.636 [2024-07-15 15:17:28.647728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:21:50.636 [2024-07-15 15:17:28.647736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.636 [2024-07-15 15:17:28.686770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.636 [2024-07-15 15:17:28.686816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:50.636 [2024-07-15 15:17:28.686829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.090 ms 00:21:50.636 [2024-07-15 15:17:28.686837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.636 [2024-07-15 15:17:28.727846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.636 [2024-07-15 15:17:28.727890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:50.636 [2024-07-15 15:17:28.727904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.027 ms 00:21:50.636 [2024-07-15 15:17:28.727923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.896 [2024-07-15 15:17:28.767973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.896 [2024-07-15 15:17:28.768025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:50.896 [2024-07-15 15:17:28.768038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.068 ms 00:21:50.896 [2024-07-15 15:17:28.768061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.896 [2024-07-15 15:17:28.805461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.896 [2024-07-15 15:17:28.805506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:50.896 [2024-07-15 15:17:28.805519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.382 ms 00:21:50.896 [2024-07-15 15:17:28.805526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.896 [2024-07-15 15:17:28.805583] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:50.896 [2024-07-15 15:17:28.805603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:50.896 [2024-07-15 15:17:28.805628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:50.896 [2024-07-15 15:17:28.805636] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:50.896 [2024-07-15 15:17:28.805644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:50.896 [2024-07-15 15:17:28.805652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:50.896 [2024-07-15 15:17:28.805660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:50.896 [2024-07-15 15:17:28.805668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:50.896 [2024-07-15 15:17:28.805676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:50.896 [2024-07-15 15:17:28.805684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:50.896 [2024-07-15 15:17:28.805695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:50.896 [2024-07-15 15:17:28.805706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:50.896 [2024-07-15 15:17:28.805715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:50.896 [2024-07-15 15:17:28.805723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:50.896 [2024-07-15 15:17:28.805730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:50.896 [2024-07-15 15:17:28.805738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:50.896 [2024-07-15 15:17:28.805746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:50.896 [2024-07-15 15:17:28.805754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:50.896 [2024-07-15 15:17:28.805763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:50.896 [2024-07-15 15:17:28.805774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:50.896 [2024-07-15 15:17:28.805786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:50.896 [2024-07-15 15:17:28.805796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:50.896 [2024-07-15 15:17:28.805804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:50.896 [2024-07-15 15:17:28.805811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:50.896 [2024-07-15 15:17:28.805819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:50.896 [2024-07-15 15:17:28.805827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:50.896 [2024-07-15 15:17:28.805834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:50.896 [2024-07-15 15:17:28.805844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:50.896 [2024-07-15 15:17:28.805852] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:50.896 [2024-07-15 15:17:28.805859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:50.896 [2024-07-15 15:17:28.805867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:50.896 [2024-07-15 15:17:28.805875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:50.896 [2024-07-15 15:17:28.805883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:50.896 [2024-07-15 15:17:28.805890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:50.896 [2024-07-15 15:17:28.805909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:50.896 [2024-07-15 15:17:28.805917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:50.896 [2024-07-15 15:17:28.805924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:50.896 [2024-07-15 15:17:28.805931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:50.896 [2024-07-15 15:17:28.805938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.805945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.805952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.805959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.805966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.805974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.805981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.805988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.805995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.806003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.806011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.806139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.806174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.806203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.806231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 
[2024-07-15 15:17:28.806259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.806325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.806358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.806390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.806458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.806520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.806564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.806608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.806650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.806725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.806767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.806812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.806823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.806832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.806841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.806849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.806858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.806866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.806874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.806883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.806891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.806900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.806908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.806917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.806925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:21:50.897 [2024-07-15 15:17:28.806934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.806942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.806951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.806959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.806968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.806976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.806984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.807002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.807012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.807020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.807029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.807037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.807045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.807054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.807062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.807071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.807081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.807090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.807098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.807107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.807115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.807123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.807132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:50.897 [2024-07-15 15:17:28.807150] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:50.897 [2024-07-15 15:17:28.807166] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3a743e15-4abe-4742-97e2-4f048b457e12 
00:21:50.897 [2024-07-15 15:17:28.807174] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:50.897 [2024-07-15 15:17:28.807183] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:50.897 [2024-07-15 15:17:28.807192] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:50.897 [2024-07-15 15:17:28.807215] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:50.897 [2024-07-15 15:17:28.807223] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:50.897 [2024-07-15 15:17:28.807232] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:50.897 [2024-07-15 15:17:28.807240] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:50.897 [2024-07-15 15:17:28.807247] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:50.897 [2024-07-15 15:17:28.807254] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:50.897 [2024-07-15 15:17:28.807263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.897 [2024-07-15 15:17:28.807272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:50.897 [2024-07-15 15:17:28.807282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.685 ms 00:21:50.897 [2024-07-15 15:17:28.807290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.897 [2024-07-15 15:17:28.827979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.897 [2024-07-15 15:17:28.828028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:50.897 [2024-07-15 15:17:28.828038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.696 ms 00:21:50.897 [2024-07-15 15:17:28.828045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.897 [2024-07-15 15:17:28.828542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.897 [2024-07-15 15:17:28.828556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:50.897 [2024-07-15 15:17:28.828565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.450 ms 00:21:50.897 [2024-07-15 15:17:28.828577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.897 [2024-07-15 15:17:28.877475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:50.897 [2024-07-15 15:17:28.877520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:50.897 [2024-07-15 15:17:28.877531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:50.897 [2024-07-15 15:17:28.877540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.897 [2024-07-15 15:17:28.877627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:50.897 [2024-07-15 15:17:28.877636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:50.897 [2024-07-15 15:17:28.877644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:50.897 [2024-07-15 15:17:28.877656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.897 [2024-07-15 15:17:28.877707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:50.897 [2024-07-15 15:17:28.877718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:50.897 [2024-07-15 15:17:28.877725] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:50.897 [2024-07-15 15:17:28.877732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.897 [2024-07-15 15:17:28.877750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:50.897 [2024-07-15 15:17:28.877758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:50.897 [2024-07-15 15:17:28.877765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:50.897 [2024-07-15 15:17:28.877772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.897 [2024-07-15 15:17:29.002265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:50.897 [2024-07-15 15:17:29.002325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:50.897 [2024-07-15 15:17:29.002337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:50.897 [2024-07-15 15:17:29.002361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.201 [2024-07-15 15:17:29.110081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.201 [2024-07-15 15:17:29.110141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:51.201 [2024-07-15 15:17:29.110154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.201 [2024-07-15 15:17:29.110177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.201 [2024-07-15 15:17:29.110257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.201 [2024-07-15 15:17:29.110266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:51.201 [2024-07-15 15:17:29.110273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.201 [2024-07-15 15:17:29.110280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.201 [2024-07-15 15:17:29.110307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.201 [2024-07-15 15:17:29.110315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:51.201 [2024-07-15 15:17:29.110322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.202 [2024-07-15 15:17:29.110329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.202 [2024-07-15 15:17:29.110425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.202 [2024-07-15 15:17:29.110436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:51.202 [2024-07-15 15:17:29.110449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.202 [2024-07-15 15:17:29.110456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.202 [2024-07-15 15:17:29.110507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.202 [2024-07-15 15:17:29.110517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:51.202 [2024-07-15 15:17:29.110525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.202 [2024-07-15 15:17:29.110533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.202 [2024-07-15 15:17:29.110570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.202 [2024-07-15 15:17:29.110582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:21:51.202 [2024-07-15 15:17:29.110591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.202 [2024-07-15 15:17:29.110598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.202 [2024-07-15 15:17:29.110643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.202 [2024-07-15 15:17:29.110652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:51.202 [2024-07-15 15:17:29.110659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.202 [2024-07-15 15:17:29.110666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.202 [2024-07-15 15:17:29.110810] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 549.165 ms, result 0 00:21:52.581 00:21:52.581 00:21:52.581 15:17:30 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:21:52.581 15:17:30 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=81873 00:21:52.581 15:17:30 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 81873 00:21:52.581 15:17:30 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 81873 ']' 00:21:52.581 15:17:30 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.581 15:17:30 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:52.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:52.581 15:17:30 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:52.581 15:17:30 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:52.581 15:17:30 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:52.839 [2024-07-15 15:17:30.699195] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
00:21:52.839 [2024-07-15 15:17:30.699313] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81873 ] 00:21:52.839 [2024-07-15 15:17:30.860138] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.098 [2024-07-15 15:17:31.090918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.035 15:17:32 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:54.035 15:17:32 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0 00:21:54.035 15:17:32 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:21:54.295 [2024-07-15 15:17:32.281343] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:54.295 [2024-07-15 15:17:32.281416] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:54.555 [2024-07-15 15:17:32.444380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.555 [2024-07-15 15:17:32.444440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:54.555 [2024-07-15 15:17:32.444454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:54.555 [2024-07-15 15:17:32.444479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.555 [2024-07-15 15:17:32.447505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.555 [2024-07-15 15:17:32.447550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:54.555 [2024-07-15 15:17:32.447563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.012 ms 00:21:54.555 [2024-07-15 15:17:32.447573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.555 [2024-07-15 15:17:32.447738] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:54.555 [2024-07-15 15:17:32.448944] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:54.555 [2024-07-15 15:17:32.448974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.555 [2024-07-15 15:17:32.448985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:54.555 [2024-07-15 15:17:32.449013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.250 ms 00:21:54.555 [2024-07-15 15:17:32.449023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.555 [2024-07-15 15:17:32.450503] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:54.555 [2024-07-15 15:17:32.471850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.555 [2024-07-15 15:17:32.471888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:54.555 [2024-07-15 15:17:32.471901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.384 ms 00:21:54.555 [2024-07-15 15:17:32.471910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.555 [2024-07-15 15:17:32.472028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.555 [2024-07-15 15:17:32.472040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:54.555 [2024-07-15 15:17:32.472050] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:21:54.555 [2024-07-15 15:17:32.472057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.555 [2024-07-15 15:17:32.478875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.555 [2024-07-15 15:17:32.478912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:54.555 [2024-07-15 15:17:32.478930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.777 ms 00:21:54.555 [2024-07-15 15:17:32.478938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.555 [2024-07-15 15:17:32.479084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.555 [2024-07-15 15:17:32.479099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:54.555 [2024-07-15 15:17:32.479112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:21:54.555 [2024-07-15 15:17:32.479120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.555 [2024-07-15 15:17:32.479160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.555 [2024-07-15 15:17:32.479170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:54.555 [2024-07-15 15:17:32.479181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:54.555 [2024-07-15 15:17:32.479189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.555 [2024-07-15 15:17:32.479219] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:54.555 [2024-07-15 15:17:32.484770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.555 [2024-07-15 15:17:32.484818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:54.555 [2024-07-15 15:17:32.484829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.573 ms 00:21:54.555 [2024-07-15 15:17:32.484839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.555 [2024-07-15 15:17:32.484899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.555 [2024-07-15 15:17:32.484912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:54.555 [2024-07-15 15:17:32.484920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:54.555 [2024-07-15 15:17:32.484932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.555 [2024-07-15 15:17:32.484951] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:54.555 [2024-07-15 15:17:32.484972] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:54.555 [2024-07-15 15:17:32.485022] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:54.555 [2024-07-15 15:17:32.485044] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:21:54.555 [2024-07-15 15:17:32.485123] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:54.555 [2024-07-15 15:17:32.485136] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:54.555 [2024-07-15 15:17:32.485148] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:54.555 [2024-07-15 15:17:32.485159] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:54.555 [2024-07-15 15:17:32.485167] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:54.555 [2024-07-15 15:17:32.485178] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:54.555 [2024-07-15 15:17:32.485189] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:54.555 [2024-07-15 15:17:32.485201] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:54.555 [2024-07-15 15:17:32.485210] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:54.555 [2024-07-15 15:17:32.485225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.555 [2024-07-15 15:17:32.485235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:54.555 [2024-07-15 15:17:32.485248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:21:54.555 [2024-07-15 15:17:32.485258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.555 [2024-07-15 15:17:32.485344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.555 [2024-07-15 15:17:32.485358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:54.555 [2024-07-15 15:17:32.485384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:21:54.555 [2024-07-15 15:17:32.485391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.555 [2024-07-15 15:17:32.485489] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:54.555 [2024-07-15 15:17:32.485502] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:54.555 [2024-07-15 15:17:32.485512] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:54.555 [2024-07-15 15:17:32.485520] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:54.555 [2024-07-15 15:17:32.485529] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:54.555 [2024-07-15 15:17:32.485535] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:54.555 [2024-07-15 15:17:32.485546] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:54.555 [2024-07-15 15:17:32.485553] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:54.555 [2024-07-15 15:17:32.485563] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:54.555 [2024-07-15 15:17:32.485569] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:54.555 [2024-07-15 15:17:32.485578] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:54.555 [2024-07-15 15:17:32.485584] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:54.555 [2024-07-15 15:17:32.485592] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:54.555 [2024-07-15 15:17:32.485599] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:54.555 [2024-07-15 15:17:32.485607] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:54.555 [2024-07-15 15:17:32.485614] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:54.555 
[2024-07-15 15:17:32.485622] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:54.555 [2024-07-15 15:17:32.485628] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:54.555 [2024-07-15 15:17:32.485636] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:54.555 [2024-07-15 15:17:32.485642] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:54.555 [2024-07-15 15:17:32.485650] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:54.555 [2024-07-15 15:17:32.485658] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:54.555 [2024-07-15 15:17:32.485666] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:54.555 [2024-07-15 15:17:32.485672] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:54.555 [2024-07-15 15:17:32.485681] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:54.555 [2024-07-15 15:17:32.485687] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:54.555 [2024-07-15 15:17:32.485695] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:54.556 [2024-07-15 15:17:32.485710] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:54.556 [2024-07-15 15:17:32.485719] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:54.556 [2024-07-15 15:17:32.485727] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:54.556 [2024-07-15 15:17:32.485736] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:54.556 [2024-07-15 15:17:32.485743] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:54.556 [2024-07-15 15:17:32.485751] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:54.556 [2024-07-15 15:17:32.485759] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:54.556 [2024-07-15 15:17:32.485769] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:54.556 [2024-07-15 15:17:32.485775] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:54.556 [2024-07-15 15:17:32.485783] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:54.556 [2024-07-15 15:17:32.485790] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:54.556 [2024-07-15 15:17:32.485798] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:54.556 [2024-07-15 15:17:32.485804] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:54.556 [2024-07-15 15:17:32.485832] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:54.556 [2024-07-15 15:17:32.485839] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:54.556 [2024-07-15 15:17:32.485847] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:54.556 [2024-07-15 15:17:32.485853] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:54.556 [2024-07-15 15:17:32.485864] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:54.556 [2024-07-15 15:17:32.485871] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:54.556 [2024-07-15 15:17:32.485880] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:54.556 [2024-07-15 15:17:32.485887] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:21:54.556 [2024-07-15 15:17:32.485895] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:54.556 [2024-07-15 15:17:32.485903] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:54.556 [2024-07-15 15:17:32.485911] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:54.556 [2024-07-15 15:17:32.485917] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:54.556 [2024-07-15 15:17:32.485926] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:54.556 [2024-07-15 15:17:32.485935] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:54.556 [2024-07-15 15:17:32.485946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:54.556 [2024-07-15 15:17:32.485954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:54.556 [2024-07-15 15:17:32.485966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:54.556 [2024-07-15 15:17:32.485974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:54.556 [2024-07-15 15:17:32.485983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:54.556 [2024-07-15 15:17:32.485990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:54.556 [2024-07-15 15:17:32.485999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:54.556 [2024-07-15 15:17:32.486015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:54.556 [2024-07-15 15:17:32.486025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:54.556 [2024-07-15 15:17:32.486032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:54.556 [2024-07-15 15:17:32.486041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:54.556 [2024-07-15 15:17:32.486048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:54.556 [2024-07-15 15:17:32.486059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:54.556 [2024-07-15 15:17:32.486066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:54.556 [2024-07-15 15:17:32.486075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:54.556 [2024-07-15 15:17:32.486082] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:54.556 [2024-07-15 
15:17:32.486091] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:54.556 [2024-07-15 15:17:32.486099] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:54.556 [2024-07-15 15:17:32.486110] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:54.556 [2024-07-15 15:17:32.486117] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:54.556 [2024-07-15 15:17:32.486126] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:54.556 [2024-07-15 15:17:32.486134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.556 [2024-07-15 15:17:32.486143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:54.556 [2024-07-15 15:17:32.486150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.702 ms 00:21:54.556 [2024-07-15 15:17:32.486159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.556 [2024-07-15 15:17:32.532625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.556 [2024-07-15 15:17:32.532762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:54.556 [2024-07-15 15:17:32.532799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.488 ms 00:21:54.556 [2024-07-15 15:17:32.532826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.556 [2024-07-15 15:17:32.533045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.556 [2024-07-15 15:17:32.533090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:54.556 [2024-07-15 15:17:32.533120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:54.556 [2024-07-15 15:17:32.533156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.556 [2024-07-15 15:17:32.586556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.556 [2024-07-15 15:17:32.586688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:54.556 [2024-07-15 15:17:32.586725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.453 ms 00:21:54.556 [2024-07-15 15:17:32.586751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.556 [2024-07-15 15:17:32.586886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.556 [2024-07-15 15:17:32.586928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:54.556 [2024-07-15 15:17:32.586971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:54.556 [2024-07-15 15:17:32.587026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.556 [2024-07-15 15:17:32.587503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.556 [2024-07-15 15:17:32.587554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:54.556 [2024-07-15 15:17:32.587591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:21:54.556 [2024-07-15 15:17:32.587629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:54.556 [2024-07-15 15:17:32.587779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.556 [2024-07-15 15:17:32.587833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:54.556 [2024-07-15 15:17:32.587864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:21:54.556 [2024-07-15 15:17:32.587895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.556 [2024-07-15 15:17:32.611230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.556 [2024-07-15 15:17:32.611339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:54.556 [2024-07-15 15:17:32.611376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.328 ms 00:21:54.556 [2024-07-15 15:17:32.611403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.556 [2024-07-15 15:17:32.632455] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:21:54.556 [2024-07-15 15:17:32.632549] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:54.556 [2024-07-15 15:17:32.632588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.556 [2024-07-15 15:17:32.632611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:54.556 [2024-07-15 15:17:32.632632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.058 ms 00:21:54.556 [2024-07-15 15:17:32.632653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.556 [2024-07-15 15:17:32.663249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.556 [2024-07-15 15:17:32.663344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:54.556 [2024-07-15 15:17:32.663380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.533 ms 00:21:54.556 [2024-07-15 15:17:32.663406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.816 [2024-07-15 15:17:32.683689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.816 [2024-07-15 15:17:32.683795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:54.816 [2024-07-15 15:17:32.683836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.227 ms 00:21:54.816 [2024-07-15 15:17:32.683859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.816 [2024-07-15 15:17:32.704635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.816 [2024-07-15 15:17:32.704728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:54.816 [2024-07-15 15:17:32.704757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.713 ms 00:21:54.816 [2024-07-15 15:17:32.704779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.816 [2024-07-15 15:17:32.705730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.816 [2024-07-15 15:17:32.705794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:54.816 [2024-07-15 15:17:32.705826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.817 ms 00:21:54.816 [2024-07-15 15:17:32.705849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.816 [2024-07-15 
15:17:32.806934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.816 [2024-07-15 15:17:32.807079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:54.816 [2024-07-15 15:17:32.807123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.207 ms 00:21:54.816 [2024-07-15 15:17:32.807150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.816 [2024-07-15 15:17:32.822820] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:54.816 [2024-07-15 15:17:32.840102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.816 [2024-07-15 15:17:32.840209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:54.816 [2024-07-15 15:17:32.840242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.835 ms 00:21:54.816 [2024-07-15 15:17:32.840265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.816 [2024-07-15 15:17:32.840387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.816 [2024-07-15 15:17:32.840413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:54.816 [2024-07-15 15:17:32.840442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:54.816 [2024-07-15 15:17:32.840466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.816 [2024-07-15 15:17:32.840543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.816 [2024-07-15 15:17:32.840574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:54.816 [2024-07-15 15:17:32.840602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:54.816 [2024-07-15 15:17:32.840622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.816 [2024-07-15 15:17:32.840681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.816 [2024-07-15 15:17:32.840710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:54.816 [2024-07-15 15:17:32.840724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:54.816 [2024-07-15 15:17:32.840731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.816 [2024-07-15 15:17:32.840766] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:54.816 [2024-07-15 15:17:32.840776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.816 [2024-07-15 15:17:32.840787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:54.816 [2024-07-15 15:17:32.840795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:54.816 [2024-07-15 15:17:32.840803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.816 [2024-07-15 15:17:32.881562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.816 [2024-07-15 15:17:32.881681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:54.816 [2024-07-15 15:17:32.881714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.813 ms 00:21:54.816 [2024-07-15 15:17:32.881737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.816 [2024-07-15 15:17:32.881879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.817 [2024-07-15 15:17:32.881927] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:54.817 [2024-07-15 15:17:32.881958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:21:54.817 [2024-07-15 15:17:32.881985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.817 [2024-07-15 15:17:32.883159] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:54.817 [2024-07-15 15:17:32.889304] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 439.249 ms, result 0 00:21:54.817 [2024-07-15 15:17:32.890299] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:54.817 Some configs were skipped because the RPC state that can call them passed over. 00:21:55.076 15:17:32 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:21:55.076 [2024-07-15 15:17:33.123400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.076 [2024-07-15 15:17:33.123550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:21:55.076 [2024-07-15 15:17:33.123595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.166 ms 00:21:55.076 [2024-07-15 15:17:33.123620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.076 [2024-07-15 15:17:33.123683] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.464 ms, result 0 00:21:55.076 true 00:21:55.076 15:17:33 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:21:55.334 [2024-07-15 15:17:33.319116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.334 [2024-07-15 15:17:33.319186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:21:55.334 [2024-07-15 15:17:33.319202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.057 ms 00:21:55.334 [2024-07-15 15:17:33.319213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.334 [2024-07-15 15:17:33.319254] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.206 ms, result 0 00:21:55.334 true 00:21:55.334 15:17:33 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 81873 00:21:55.334 15:17:33 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 81873 ']' 00:21:55.334 15:17:33 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 81873 00:21:55.334 15:17:33 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname 00:21:55.334 15:17:33 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:55.334 15:17:33 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81873 00:21:55.334 killing process with pid 81873 00:21:55.334 15:17:33 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:55.334 15:17:33 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:55.334 15:17:33 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81873' 00:21:55.334 15:17:33 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 81873 00:21:55.334 15:17:33 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 81873 00:21:56.714 [2024-07-15 15:17:34.570405] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.714 [2024-07-15 15:17:34.570474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:56.714 [2024-07-15 15:17:34.570488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:56.714 [2024-07-15 15:17:34.570497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.714 [2024-07-15 15:17:34.570537] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:56.714 [2024-07-15 15:17:34.574829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.714 [2024-07-15 15:17:34.574867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:56.714 [2024-07-15 15:17:34.574878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.285 ms 00:21:56.714 [2024-07-15 15:17:34.574888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.714 [2024-07-15 15:17:34.575169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.714 [2024-07-15 15:17:34.575184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:56.714 [2024-07-15 15:17:34.575193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:21:56.714 [2024-07-15 15:17:34.575202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.714 [2024-07-15 15:17:34.578788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.714 [2024-07-15 15:17:34.578827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:56.714 [2024-07-15 15:17:34.578841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.577 ms 00:21:56.714 [2024-07-15 15:17:34.578851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.714 [2024-07-15 15:17:34.584963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.714 [2024-07-15 15:17:34.585012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:56.714 [2024-07-15 15:17:34.585022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.087 ms 00:21:56.714 [2024-07-15 15:17:34.585033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.714 [2024-07-15 15:17:34.601869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.714 [2024-07-15 15:17:34.601937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:56.714 [2024-07-15 15:17:34.601951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.784 ms 00:21:56.714 [2024-07-15 15:17:34.601963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.714 [2024-07-15 15:17:34.613213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.714 [2024-07-15 15:17:34.613269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:56.714 [2024-07-15 15:17:34.613286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.204 ms 00:21:56.714 [2024-07-15 15:17:34.613295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.714 [2024-07-15 15:17:34.613430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.714 [2024-07-15 15:17:34.613444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:56.714 [2024-07-15 15:17:34.613452] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:21:56.714 [2024-07-15 15:17:34.613478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.714 [2024-07-15 15:17:34.630035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.714 [2024-07-15 15:17:34.630084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:56.714 [2024-07-15 15:17:34.630097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.566 ms 00:21:56.714 [2024-07-15 15:17:34.630106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.714 [2024-07-15 15:17:34.646936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.714 [2024-07-15 15:17:34.647001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:56.714 [2024-07-15 15:17:34.647016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.804 ms 00:21:56.714 [2024-07-15 15:17:34.647032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.714 [2024-07-15 15:17:34.663189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.714 [2024-07-15 15:17:34.663269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:56.714 [2024-07-15 15:17:34.663283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.134 ms 00:21:56.714 [2024-07-15 15:17:34.663292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.714 [2024-07-15 15:17:34.679746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.714 [2024-07-15 15:17:34.679810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:56.714 [2024-07-15 15:17:34.679823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.397 ms 00:21:56.714 [2024-07-15 15:17:34.679832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.714 [2024-07-15 15:17:34.679879] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:56.714 [2024-07-15 15:17:34.679899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:56.714 [2024-07-15 15:17:34.679909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:56.714 [2024-07-15 15:17:34.679919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:56.714 [2024-07-15 15:17:34.679927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:56.714 [2024-07-15 15:17:34.679937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:56.714 [2024-07-15 15:17:34.679945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:56.714 [2024-07-15 15:17:34.679957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:56.714 [2024-07-15 15:17:34.679965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:56.714 [2024-07-15 15:17:34.679975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:56.714 [2024-07-15 15:17:34.679983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:56.714 [2024-07-15 
15:17:34.680005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:56.714 [2024-07-15 15:17:34.680014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:21:56.715 [2024-07-15 15:17:34.680222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:56.715 [2024-07-15 15:17:34.680810] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:56.715 [2024-07-15 15:17:34.680818] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3a743e15-4abe-4742-97e2-4f048b457e12 00:21:56.715 [2024-07-15 15:17:34.680832] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:56.715 [2024-07-15 15:17:34.680839] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:56.715 [2024-07-15 15:17:34.680848] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:56.715 [2024-07-15 15:17:34.680856] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:56.715 [2024-07-15 15:17:34.680865] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:56.715 [2024-07-15 15:17:34.680873] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:56.715 [2024-07-15 15:17:34.680882] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:56.715 [2024-07-15 15:17:34.680889] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:56.716 [2024-07-15 15:17:34.680915] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:56.716 [2024-07-15 15:17:34.680923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
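(For reference on the statistics block a few records above: the WAF field is presumably the usual write-amplification ratio, i.e. total media writes divided by user writes. Read against the values dumped here, that works out to

    WAF = total writes / user writes = 960 / 0  ->  undefined, printed as "inf"

which is consistent with the dump showing no user writes recorded before this shutdown.)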
00:21:56.716 [2024-07-15 15:17:34.680933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:56.716 [2024-07-15 15:17:34.680941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.048 ms 00:21:56.716 [2024-07-15 15:17:34.680950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.716 [2024-07-15 15:17:34.702925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.716 [2024-07-15 15:17:34.702988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:56.716 [2024-07-15 15:17:34.703031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.989 ms 00:21:56.716 [2024-07-15 15:17:34.703045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.716 [2024-07-15 15:17:34.703723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.716 [2024-07-15 15:17:34.703746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:56.716 [2024-07-15 15:17:34.703759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.598 ms 00:21:56.716 [2024-07-15 15:17:34.703772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.716 [2024-07-15 15:17:34.776625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.716 [2024-07-15 15:17:34.776703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:56.716 [2024-07-15 15:17:34.776715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.716 [2024-07-15 15:17:34.776725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.716 [2024-07-15 15:17:34.776842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.716 [2024-07-15 15:17:34.776853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:56.716 [2024-07-15 15:17:34.776862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.716 [2024-07-15 15:17:34.776874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.716 [2024-07-15 15:17:34.776932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.716 [2024-07-15 15:17:34.776945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:56.716 [2024-07-15 15:17:34.776953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.716 [2024-07-15 15:17:34.776965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.716 [2024-07-15 15:17:34.776983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.716 [2024-07-15 15:17:34.777010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:56.716 [2024-07-15 15:17:34.777018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.716 [2024-07-15 15:17:34.777027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.976 [2024-07-15 15:17:34.902429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.976 [2024-07-15 15:17:34.902500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:56.976 [2024-07-15 15:17:34.902514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.976 [2024-07-15 15:17:34.902523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.976 [2024-07-15 
15:17:35.011496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.976 [2024-07-15 15:17:35.011562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:56.976 [2024-07-15 15:17:35.011575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.976 [2024-07-15 15:17:35.011585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.976 [2024-07-15 15:17:35.011671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.976 [2024-07-15 15:17:35.011683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:56.976 [2024-07-15 15:17:35.011691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.976 [2024-07-15 15:17:35.011701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.976 [2024-07-15 15:17:35.011729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.976 [2024-07-15 15:17:35.011750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:56.976 [2024-07-15 15:17:35.011757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.976 [2024-07-15 15:17:35.011766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.976 [2024-07-15 15:17:35.011872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.976 [2024-07-15 15:17:35.011884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:56.976 [2024-07-15 15:17:35.011893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.976 [2024-07-15 15:17:35.011901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.976 [2024-07-15 15:17:35.011933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.976 [2024-07-15 15:17:35.011945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:56.976 [2024-07-15 15:17:35.011952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.976 [2024-07-15 15:17:35.011961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.976 [2024-07-15 15:17:35.011997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.976 [2024-07-15 15:17:35.012134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:56.976 [2024-07-15 15:17:35.012156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.976 [2024-07-15 15:17:35.012177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.976 [2024-07-15 15:17:35.012242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.976 [2024-07-15 15:17:35.012295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:56.976 [2024-07-15 15:17:35.012338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.976 [2024-07-15 15:17:35.012364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.976 [2024-07-15 15:17:35.012522] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 442.955 ms, result 0 00:21:58.365 15:17:36 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:21:58.365 15:17:36 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:58.365 [2024-07-15 15:17:36.172019] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:21:58.365 [2024-07-15 15:17:36.172226] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81937 ] 00:21:58.365 [2024-07-15 15:17:36.334071] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.623 [2024-07-15 15:17:36.572561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.880 [2024-07-15 15:17:36.972046] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:58.880 [2024-07-15 15:17:36.972107] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:59.139 [2024-07-15 15:17:37.129916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.139 [2024-07-15 15:17:37.129971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:59.139 [2024-07-15 15:17:37.129985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:59.140 [2024-07-15 15:17:37.130005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.140 [2024-07-15 15:17:37.133329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.140 [2024-07-15 15:17:37.133382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:59.140 [2024-07-15 15:17:37.133399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.310 ms 00:21:59.140 [2024-07-15 15:17:37.133410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.140 [2024-07-15 15:17:37.133544] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:59.140 [2024-07-15 15:17:37.134758] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:59.140 [2024-07-15 15:17:37.134814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.140 [2024-07-15 15:17:37.134831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:59.140 [2024-07-15 15:17:37.134847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.284 ms 00:21:59.140 [2024-07-15 15:17:37.134860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.140 [2024-07-15 15:17:37.136658] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:59.140 [2024-07-15 15:17:37.159831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.140 [2024-07-15 15:17:37.159968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:59.140 [2024-07-15 15:17:37.160013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.219 ms 00:21:59.140 [2024-07-15 15:17:37.160036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.140 [2024-07-15 15:17:37.160150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.140 [2024-07-15 15:17:37.160193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:59.140 [2024-07-15 15:17:37.160228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.030 ms 00:21:59.140 [2024-07-15 15:17:37.160257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.140 [2024-07-15 15:17:37.167091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.140 [2024-07-15 15:17:37.167121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:59.140 [2024-07-15 15:17:37.167132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.780 ms 00:21:59.140 [2024-07-15 15:17:37.167156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.140 [2024-07-15 15:17:37.167278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.140 [2024-07-15 15:17:37.167295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:59.140 [2024-07-15 15:17:37.167305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:21:59.140 [2024-07-15 15:17:37.167313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.140 [2024-07-15 15:17:37.167352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.140 [2024-07-15 15:17:37.167363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:59.140 [2024-07-15 15:17:37.167372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:59.140 [2024-07-15 15:17:37.167383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.140 [2024-07-15 15:17:37.167410] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:59.140 [2024-07-15 15:17:37.173617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.140 [2024-07-15 15:17:37.173645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:59.140 [2024-07-15 15:17:37.173655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.227 ms 00:21:59.140 [2024-07-15 15:17:37.173663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.140 [2024-07-15 15:17:37.173725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.140 [2024-07-15 15:17:37.173735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:59.140 [2024-07-15 15:17:37.173743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:59.140 [2024-07-15 15:17:37.173751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.140 [2024-07-15 15:17:37.173770] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:59.140 [2024-07-15 15:17:37.173791] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:59.140 [2024-07-15 15:17:37.173825] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:59.140 [2024-07-15 15:17:37.173840] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:21:59.140 [2024-07-15 15:17:37.173922] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:59.140 [2024-07-15 15:17:37.173932] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:59.140 [2024-07-15 15:17:37.173942] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:59.140 [2024-07-15 15:17:37.173952] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:59.140 [2024-07-15 15:17:37.173961] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:59.140 [2024-07-15 15:17:37.173968] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:59.140 [2024-07-15 15:17:37.173978] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:59.140 [2024-07-15 15:17:37.173986] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:59.140 [2024-07-15 15:17:37.174007] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:59.140 [2024-07-15 15:17:37.174015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.140 [2024-07-15 15:17:37.174023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:59.140 [2024-07-15 15:17:37.174032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:21:59.140 [2024-07-15 15:17:37.174039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.140 [2024-07-15 15:17:37.174123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.140 [2024-07-15 15:17:37.174131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:59.140 [2024-07-15 15:17:37.174138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:21:59.140 [2024-07-15 15:17:37.174148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.140 [2024-07-15 15:17:37.174231] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:59.140 [2024-07-15 15:17:37.174239] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:59.140 [2024-07-15 15:17:37.174247] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:59.140 [2024-07-15 15:17:37.174254] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:59.140 [2024-07-15 15:17:37.174261] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:59.140 [2024-07-15 15:17:37.174283] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:59.140 [2024-07-15 15:17:37.174291] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:59.140 [2024-07-15 15:17:37.174297] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:59.140 [2024-07-15 15:17:37.174304] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:59.140 [2024-07-15 15:17:37.174311] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:59.140 [2024-07-15 15:17:37.174318] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:59.140 [2024-07-15 15:17:37.174325] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:59.140 [2024-07-15 15:17:37.174331] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:59.140 [2024-07-15 15:17:37.174338] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:59.140 [2024-07-15 15:17:37.174345] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:59.140 [2024-07-15 15:17:37.174351] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:59.140 [2024-07-15 15:17:37.174357] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:59.140 [2024-07-15 15:17:37.174364] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:59.140 [2024-07-15 15:17:37.174383] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:59.140 [2024-07-15 15:17:37.174407] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:59.140 [2024-07-15 15:17:37.174414] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:59.140 [2024-07-15 15:17:37.174421] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:59.140 [2024-07-15 15:17:37.174428] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:59.140 [2024-07-15 15:17:37.174436] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:59.140 [2024-07-15 15:17:37.174443] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:59.140 [2024-07-15 15:17:37.174459] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:59.140 [2024-07-15 15:17:37.174467] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:59.140 [2024-07-15 15:17:37.174491] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:59.140 [2024-07-15 15:17:37.174499] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:59.140 [2024-07-15 15:17:37.174506] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:59.140 [2024-07-15 15:17:37.174514] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:59.140 [2024-07-15 15:17:37.174522] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:59.140 [2024-07-15 15:17:37.174530] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:59.140 [2024-07-15 15:17:37.174537] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:59.140 [2024-07-15 15:17:37.174544] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:59.140 [2024-07-15 15:17:37.174552] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:59.140 [2024-07-15 15:17:37.174559] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:59.140 [2024-07-15 15:17:37.174568] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:59.140 [2024-07-15 15:17:37.174575] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:59.140 [2024-07-15 15:17:37.174583] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:59.140 [2024-07-15 15:17:37.174590] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:59.140 [2024-07-15 15:17:37.174597] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:59.140 [2024-07-15 15:17:37.174604] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:59.140 [2024-07-15 15:17:37.174612] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:59.140 [2024-07-15 15:17:37.174620] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:59.140 [2024-07-15 15:17:37.174628] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:59.140 [2024-07-15 15:17:37.174637] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:59.140 [2024-07-15 15:17:37.174645] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:59.140 
[2024-07-15 15:17:37.174653] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:59.140 [2024-07-15 15:17:37.174660] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:59.140 [2024-07-15 15:17:37.174668] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:59.140 [2024-07-15 15:17:37.174676] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:59.140 [2024-07-15 15:17:37.174684] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:59.140 [2024-07-15 15:17:37.174693] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:59.140 [2024-07-15 15:17:37.174707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:59.140 [2024-07-15 15:17:37.174717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:59.140 [2024-07-15 15:17:37.174726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:59.140 [2024-07-15 15:17:37.174735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:59.140 [2024-07-15 15:17:37.174743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:59.140 [2024-07-15 15:17:37.174752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:59.140 [2024-07-15 15:17:37.174760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:59.140 [2024-07-15 15:17:37.174768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:59.140 [2024-07-15 15:17:37.174776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:59.140 [2024-07-15 15:17:37.174785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:59.140 [2024-07-15 15:17:37.174793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:59.140 [2024-07-15 15:17:37.174802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:59.140 [2024-07-15 15:17:37.174810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:59.140 [2024-07-15 15:17:37.174817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:59.140 [2024-07-15 15:17:37.174826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:59.140 [2024-07-15 15:17:37.174834] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:59.140 [2024-07-15 15:17:37.174843] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:59.140 [2024-07-15 15:17:37.174852] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:59.140 [2024-07-15 15:17:37.174861] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:59.140 [2024-07-15 15:17:37.174869] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:59.140 [2024-07-15 15:17:37.174877] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:59.140 [2024-07-15 15:17:37.174886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.140 [2024-07-15 15:17:37.174896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:59.140 [2024-07-15 15:17:37.174905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.707 ms 00:21:59.140 [2024-07-15 15:17:37.174913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.140 [2024-07-15 15:17:37.229485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.140 [2024-07-15 15:17:37.229548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:59.140 [2024-07-15 15:17:37.229561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.611 ms 00:21:59.140 [2024-07-15 15:17:37.229585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.140 [2024-07-15 15:17:37.229774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.140 [2024-07-15 15:17:37.229785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:59.140 [2024-07-15 15:17:37.229793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:21:59.140 [2024-07-15 15:17:37.229805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.400 [2024-07-15 15:17:37.280872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.400 [2024-07-15 15:17:37.280923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:59.400 [2024-07-15 15:17:37.280936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.142 ms 00:21:59.400 [2024-07-15 15:17:37.280943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.400 [2024-07-15 15:17:37.281059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.400 [2024-07-15 15:17:37.281071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:59.400 [2024-07-15 15:17:37.281079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:59.400 [2024-07-15 15:17:37.281086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.400 [2024-07-15 15:17:37.281503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.400 [2024-07-15 15:17:37.281518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:59.400 [2024-07-15 15:17:37.281527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.398 ms 00:21:59.400 [2024-07-15 15:17:37.281533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.400 [2024-07-15 
15:17:37.281664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.400 [2024-07-15 15:17:37.281679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:59.400 [2024-07-15 15:17:37.281687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:21:59.400 [2024-07-15 15:17:37.281694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.400 [2024-07-15 15:17:37.303163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.400 [2024-07-15 15:17:37.303212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:59.400 [2024-07-15 15:17:37.303225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.486 ms 00:21:59.400 [2024-07-15 15:17:37.303234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.400 [2024-07-15 15:17:37.324150] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:21:59.400 [2024-07-15 15:17:37.324194] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:59.400 [2024-07-15 15:17:37.324208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.400 [2024-07-15 15:17:37.324232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:59.400 [2024-07-15 15:17:37.324242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.857 ms 00:21:59.400 [2024-07-15 15:17:37.324250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.400 [2024-07-15 15:17:37.359152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.400 [2024-07-15 15:17:37.359229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:59.400 [2024-07-15 15:17:37.359245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.867 ms 00:21:59.400 [2024-07-15 15:17:37.359255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.400 [2024-07-15 15:17:37.380816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.400 [2024-07-15 15:17:37.380863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:59.400 [2024-07-15 15:17:37.380875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.435 ms 00:21:59.400 [2024-07-15 15:17:37.380882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.400 [2024-07-15 15:17:37.400854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.400 [2024-07-15 15:17:37.400892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:59.400 [2024-07-15 15:17:37.400904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.918 ms 00:21:59.400 [2024-07-15 15:17:37.400910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.400 [2024-07-15 15:17:37.401765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.400 [2024-07-15 15:17:37.401797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:59.400 [2024-07-15 15:17:37.401807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.743 ms 00:21:59.400 [2024-07-15 15:17:37.401814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.400 [2024-07-15 15:17:37.496468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:21:59.400 [2024-07-15 15:17:37.496529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:59.400 [2024-07-15 15:17:37.496544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.808 ms 00:21:59.400 [2024-07-15 15:17:37.496552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.400 [2024-07-15 15:17:37.509562] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:59.659 [2024-07-15 15:17:37.526374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.659 [2024-07-15 15:17:37.526419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:59.659 [2024-07-15 15:17:37.526433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.740 ms 00:21:59.659 [2024-07-15 15:17:37.526442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.659 [2024-07-15 15:17:37.526557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.659 [2024-07-15 15:17:37.526569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:59.659 [2024-07-15 15:17:37.526581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:59.659 [2024-07-15 15:17:37.526589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.659 [2024-07-15 15:17:37.526644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.659 [2024-07-15 15:17:37.526653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:59.659 [2024-07-15 15:17:37.526661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:59.659 [2024-07-15 15:17:37.526668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.659 [2024-07-15 15:17:37.526689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.659 [2024-07-15 15:17:37.526697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:59.659 [2024-07-15 15:17:37.526705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:59.659 [2024-07-15 15:17:37.526715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.659 [2024-07-15 15:17:37.526747] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:59.659 [2024-07-15 15:17:37.526756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.659 [2024-07-15 15:17:37.526763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:59.659 [2024-07-15 15:17:37.526771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:59.660 [2024-07-15 15:17:37.526778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.660 [2024-07-15 15:17:37.568413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.660 [2024-07-15 15:17:37.568455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:59.660 [2024-07-15 15:17:37.568473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.695 ms 00:21:59.660 [2024-07-15 15:17:37.568497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.660 [2024-07-15 15:17:37.568619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.660 [2024-07-15 15:17:37.568631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:21:59.660 [2024-07-15 15:17:37.568640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:21:59.660 [2024-07-15 15:17:37.568647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.660 [2024-07-15 15:17:37.569657] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:59.660 [2024-07-15 15:17:37.575448] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 440.250 ms, result 0 00:21:59.660 [2024-07-15 15:17:37.576286] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:59.660 [2024-07-15 15:17:37.595101] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:07.667  Copying: 35/256 [MB] (35 MBps) Copying: 69/256 [MB] (33 MBps) Copying: 101/256 [MB] (32 MBps) Copying: 134/256 [MB] (32 MBps) Copying: 164/256 [MB] (30 MBps) Copying: 196/256 [MB] (31 MBps) Copying: 227/256 [MB] (31 MBps) Copying: 256/256 [MB] (average 32 MBps)[2024-07-15 15:17:45.491894] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:07.667 [2024-07-15 15:17:45.507985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.667 [2024-07-15 15:17:45.508065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:07.667 [2024-07-15 15:17:45.508080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:07.667 [2024-07-15 15:17:45.508088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.667 [2024-07-15 15:17:45.508116] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:07.667 [2024-07-15 15:17:45.512051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.667 [2024-07-15 15:17:45.512093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:07.667 [2024-07-15 15:17:45.512104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.926 ms 00:22:07.667 [2024-07-15 15:17:45.512111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.667 [2024-07-15 15:17:45.512354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.667 [2024-07-15 15:17:45.512364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:07.667 [2024-07-15 15:17:45.512372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:22:07.667 [2024-07-15 15:17:45.512378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.667 [2024-07-15 15:17:45.515280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.667 [2024-07-15 15:17:45.515298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:07.667 [2024-07-15 15:17:45.515307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.893 ms 00:22:07.667 [2024-07-15 15:17:45.515319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.667 [2024-07-15 15:17:45.520776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.667 [2024-07-15 15:17:45.520804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:07.667 [2024-07-15 15:17:45.520813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
5.449 ms 00:22:07.667 [2024-07-15 15:17:45.520821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.667 [2024-07-15 15:17:45.559874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.667 [2024-07-15 15:17:45.559913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:07.667 [2024-07-15 15:17:45.559925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.059 ms 00:22:07.667 [2024-07-15 15:17:45.559933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.667 [2024-07-15 15:17:45.583070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.667 [2024-07-15 15:17:45.583132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:07.667 [2024-07-15 15:17:45.583145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.138 ms 00:22:07.667 [2024-07-15 15:17:45.583153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.667 [2024-07-15 15:17:45.583300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.667 [2024-07-15 15:17:45.583311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:07.667 [2024-07-15 15:17:45.583320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:22:07.667 [2024-07-15 15:17:45.583327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.667 [2024-07-15 15:17:45.622257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.667 [2024-07-15 15:17:45.622295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:07.667 [2024-07-15 15:17:45.622306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.988 ms 00:22:07.667 [2024-07-15 15:17:45.622330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.667 [2024-07-15 15:17:45.661508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.667 [2024-07-15 15:17:45.661566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:07.667 [2024-07-15 15:17:45.661578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.212 ms 00:22:07.667 [2024-07-15 15:17:45.661586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.667 [2024-07-15 15:17:45.698741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.667 [2024-07-15 15:17:45.698774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:07.667 [2024-07-15 15:17:45.698784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.168 ms 00:22:07.667 [2024-07-15 15:17:45.698791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.667 [2024-07-15 15:17:45.737391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.667 [2024-07-15 15:17:45.737425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:07.667 [2024-07-15 15:17:45.737436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.606 ms 00:22:07.667 [2024-07-15 15:17:45.737444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.667 [2024-07-15 15:17:45.737483] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:07.667 [2024-07-15 15:17:45.737511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 
00:22:07.667 [2024-07-15 15:17:45.737530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:07.667 [2024-07-15 15:17:45.737538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:07.667 [2024-07-15 15:17:45.737546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:07.667 [2024-07-15 15:17:45.737554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:07.667 [2024-07-15 15:17:45.737563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:07.667 [2024-07-15 15:17:45.737570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:07.667 [2024-07-15 15:17:45.737578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 
state: free 00:22:07.668 [2024-07-15 15:17:45.737723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 
0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.737986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.738003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.738011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.738019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.738026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.738034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.738041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.738048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.738055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.738063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.738070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.738077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.738086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.738093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.738100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.738107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.738114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.738121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.738129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.738137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.738145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:07.668 [2024-07-15 15:17:45.738154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:07.669 [2024-07-15 15:17:45.738161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:07.669 [2024-07-15 15:17:45.738169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:07.669 [2024-07-15 15:17:45.738177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:07.669 [2024-07-15 15:17:45.738185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:07.669 [2024-07-15 15:17:45.738192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:07.669 [2024-07-15 15:17:45.738200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:07.669 [2024-07-15 15:17:45.738207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:07.669 [2024-07-15 15:17:45.738214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:07.669 [2024-07-15 15:17:45.738222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:07.669 [2024-07-15 15:17:45.738230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:07.669 [2024-07-15 15:17:45.738237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:07.669 [2024-07-15 15:17:45.738245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:07.669 [2024-07-15 15:17:45.738252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:07.669 [2024-07-15 15:17:45.738260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:07.669 [2024-07-15 15:17:45.738267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:07.669 [2024-07-15 15:17:45.738275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:07.669 [2024-07-15 15:17:45.738290] ftl_debug.c: 
211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:07.669 [2024-07-15 15:17:45.738297] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3a743e15-4abe-4742-97e2-4f048b457e12 00:22:07.669 [2024-07-15 15:17:45.738305] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:07.669 [2024-07-15 15:17:45.738311] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:07.669 [2024-07-15 15:17:45.738328] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:07.669 [2024-07-15 15:17:45.738336] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:07.669 [2024-07-15 15:17:45.738343] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:07.669 [2024-07-15 15:17:45.738350] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:07.669 [2024-07-15 15:17:45.738357] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:07.669 [2024-07-15 15:17:45.738364] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:07.669 [2024-07-15 15:17:45.738370] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:07.669 [2024-07-15 15:17:45.738378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.669 [2024-07-15 15:17:45.738385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:07.669 [2024-07-15 15:17:45.738393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.898 ms 00:22:07.669 [2024-07-15 15:17:45.738403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.669 [2024-07-15 15:17:45.760433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.669 [2024-07-15 15:17:45.760474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:07.669 [2024-07-15 15:17:45.760484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.050 ms 00:22:07.669 [2024-07-15 15:17:45.760492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.669 [2024-07-15 15:17:45.761102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.669 [2024-07-15 15:17:45.761117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:07.669 [2024-07-15 15:17:45.761134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:22:07.669 [2024-07-15 15:17:45.761142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.929 [2024-07-15 15:17:45.812271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:07.929 [2024-07-15 15:17:45.812329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:07.929 [2024-07-15 15:17:45.812340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:07.929 [2024-07-15 15:17:45.812348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.929 [2024-07-15 15:17:45.812439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:07.929 [2024-07-15 15:17:45.812448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:07.929 [2024-07-15 15:17:45.812461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:07.929 [2024-07-15 15:17:45.812468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.929 [2024-07-15 15:17:45.812519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:22:07.929 [2024-07-15 15:17:45.812530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:07.929 [2024-07-15 15:17:45.812538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:07.929 [2024-07-15 15:17:45.812545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.929 [2024-07-15 15:17:45.812562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:07.929 [2024-07-15 15:17:45.812570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:07.929 [2024-07-15 15:17:45.812577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:07.929 [2024-07-15 15:17:45.812588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.929 [2024-07-15 15:17:45.938929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:07.929 [2024-07-15 15:17:45.938982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:07.929 [2024-07-15 15:17:45.939011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:07.929 [2024-07-15 15:17:45.939021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.188 [2024-07-15 15:17:46.047861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.188 [2024-07-15 15:17:46.047911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:08.188 [2024-07-15 15:17:46.047922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.188 [2024-07-15 15:17:46.047934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.188 [2024-07-15 15:17:46.048016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.188 [2024-07-15 15:17:46.048042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:08.188 [2024-07-15 15:17:46.048050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.188 [2024-07-15 15:17:46.048057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.188 [2024-07-15 15:17:46.048084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.189 [2024-07-15 15:17:46.048092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:08.189 [2024-07-15 15:17:46.048099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.189 [2024-07-15 15:17:46.048106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.189 [2024-07-15 15:17:46.048214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.189 [2024-07-15 15:17:46.048225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:08.189 [2024-07-15 15:17:46.048233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.189 [2024-07-15 15:17:46.048240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.189 [2024-07-15 15:17:46.048272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.189 [2024-07-15 15:17:46.048280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:08.189 [2024-07-15 15:17:46.048288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.189 [2024-07-15 15:17:46.048295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.189 
[2024-07-15 15:17:46.048334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.189 [2024-07-15 15:17:46.048343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:08.189 [2024-07-15 15:17:46.048350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.189 [2024-07-15 15:17:46.048356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.189 [2024-07-15 15:17:46.048397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.189 [2024-07-15 15:17:46.048405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:08.189 [2024-07-15 15:17:46.048412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.189 [2024-07-15 15:17:46.048419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.189 [2024-07-15 15:17:46.048602] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 541.661 ms, result 0 00:22:09.570 00:22:09.570 00:22:09.570 15:17:47 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:22:09.570 15:17:47 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:22:09.829 15:17:47 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:09.829 [2024-07-15 15:17:47.831844] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:22:09.829 [2024-07-15 15:17:47.832048] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82058 ] 00:22:10.089 [2024-07-15 15:17:47.992606] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.412 [2024-07-15 15:17:48.220480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:10.671 [2024-07-15 15:17:48.615074] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:10.671 [2024-07-15 15:17:48.615218] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:10.671 [2024-07-15 15:17:48.772451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.671 [2024-07-15 15:17:48.772597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:10.671 [2024-07-15 15:17:48.772628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:10.671 [2024-07-15 15:17:48.772649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.671 [2024-07-15 15:17:48.775487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.671 [2024-07-15 15:17:48.775559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:10.671 [2024-07-15 15:17:48.775603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.814 ms 00:22:10.671 [2024-07-15 15:17:48.775623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.671 [2024-07-15 15:17:48.775727] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:10.671 [2024-07-15 
15:17:48.776953] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:10.671 [2024-07-15 15:17:48.777036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.671 [2024-07-15 15:17:48.777061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:10.671 [2024-07-15 15:17:48.777094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.318 ms 00:22:10.671 [2024-07-15 15:17:48.777129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.671 [2024-07-15 15:17:48.778659] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:10.932 [2024-07-15 15:17:48.798530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.932 [2024-07-15 15:17:48.798611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:10.932 [2024-07-15 15:17:48.798653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.894 ms 00:22:10.932 [2024-07-15 15:17:48.798677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.932 [2024-07-15 15:17:48.798836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.932 [2024-07-15 15:17:48.798893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:10.932 [2024-07-15 15:17:48.798926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:22:10.932 [2024-07-15 15:17:48.798956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.932 [2024-07-15 15:17:48.805813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.932 [2024-07-15 15:17:48.805877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:10.932 [2024-07-15 15:17:48.805905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.795 ms 00:22:10.932 [2024-07-15 15:17:48.805948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.932 [2024-07-15 15:17:48.806067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.932 [2024-07-15 15:17:48.806108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:10.932 [2024-07-15 15:17:48.806137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:22:10.932 [2024-07-15 15:17:48.806163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.932 [2024-07-15 15:17:48.806232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.932 [2024-07-15 15:17:48.806265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:10.932 [2024-07-15 15:17:48.806293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:10.932 [2024-07-15 15:17:48.806323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.932 [2024-07-15 15:17:48.806376] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:10.932 [2024-07-15 15:17:48.812073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.932 [2024-07-15 15:17:48.812134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:10.932 [2024-07-15 15:17:48.812166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.715 ms 00:22:10.932 [2024-07-15 15:17:48.812194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:22:10.932 [2024-07-15 15:17:48.812280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.932 [2024-07-15 15:17:48.812317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:10.932 [2024-07-15 15:17:48.812345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:10.932 [2024-07-15 15:17:48.812372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.932 [2024-07-15 15:17:48.812420] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:10.932 [2024-07-15 15:17:48.812465] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:10.932 [2024-07-15 15:17:48.812534] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:10.932 [2024-07-15 15:17:48.812577] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:10.932 [2024-07-15 15:17:48.812695] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:10.932 [2024-07-15 15:17:48.812737] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:10.932 [2024-07-15 15:17:48.812789] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:10.932 [2024-07-15 15:17:48.812831] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:10.932 [2024-07-15 15:17:48.812880] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:10.932 [2024-07-15 15:17:48.812927] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:10.932 [2024-07-15 15:17:48.812959] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:10.932 [2024-07-15 15:17:48.812986] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:10.932 [2024-07-15 15:17:48.813015] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:10.932 [2024-07-15 15:17:48.813064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.932 [2024-07-15 15:17:48.813091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:10.932 [2024-07-15 15:17:48.813121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.648 ms 00:22:10.932 [2024-07-15 15:17:48.813151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.932 [2024-07-15 15:17:48.813248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.932 [2024-07-15 15:17:48.813279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:10.932 [2024-07-15 15:17:48.813307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:22:10.932 [2024-07-15 15:17:48.813320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.932 [2024-07-15 15:17:48.813404] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:10.932 [2024-07-15 15:17:48.813415] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:10.932 [2024-07-15 15:17:48.813423] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:10.932 [2024-07-15 15:17:48.813431] 
ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:10.932 [2024-07-15 15:17:48.813438] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:10.932 [2024-07-15 15:17:48.813444] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:10.932 [2024-07-15 15:17:48.813451] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:10.932 [2024-07-15 15:17:48.813458] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:10.932 [2024-07-15 15:17:48.813464] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:10.932 [2024-07-15 15:17:48.813471] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:10.932 [2024-07-15 15:17:48.813478] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:10.932 [2024-07-15 15:17:48.813485] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:10.932 [2024-07-15 15:17:48.813491] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:10.932 [2024-07-15 15:17:48.813498] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:10.932 [2024-07-15 15:17:48.813504] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:10.932 [2024-07-15 15:17:48.813511] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:10.932 [2024-07-15 15:17:48.813518] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:10.932 [2024-07-15 15:17:48.813524] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:10.932 [2024-07-15 15:17:48.813543] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:10.932 [2024-07-15 15:17:48.813550] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:10.932 [2024-07-15 15:17:48.813557] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:10.932 [2024-07-15 15:17:48.813563] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:10.932 [2024-07-15 15:17:48.813571] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:10.932 [2024-07-15 15:17:48.813578] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:10.932 [2024-07-15 15:17:48.813584] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:10.932 [2024-07-15 15:17:48.813591] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:10.932 [2024-07-15 15:17:48.813597] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:10.932 [2024-07-15 15:17:48.813604] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:10.933 [2024-07-15 15:17:48.813610] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:10.933 [2024-07-15 15:17:48.813617] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:10.933 [2024-07-15 15:17:48.813624] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:10.933 [2024-07-15 15:17:48.813630] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:10.933 [2024-07-15 15:17:48.813637] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:10.933 [2024-07-15 15:17:48.813644] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:10.933 [2024-07-15 15:17:48.813650] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:10.933 
[2024-07-15 15:17:48.813657] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:10.933 [2024-07-15 15:17:48.813663] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:10.933 [2024-07-15 15:17:48.813670] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:10.933 [2024-07-15 15:17:48.813676] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:10.933 [2024-07-15 15:17:48.813683] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:10.933 [2024-07-15 15:17:48.813690] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:10.933 [2024-07-15 15:17:48.813697] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:10.933 [2024-07-15 15:17:48.813703] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:10.933 [2024-07-15 15:17:48.813710] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:10.933 [2024-07-15 15:17:48.813718] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:10.933 [2024-07-15 15:17:48.813725] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:10.933 [2024-07-15 15:17:48.813731] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:10.933 [2024-07-15 15:17:48.813739] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:10.933 [2024-07-15 15:17:48.813747] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:10.933 [2024-07-15 15:17:48.813753] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:10.933 [2024-07-15 15:17:48.813760] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:10.933 [2024-07-15 15:17:48.813766] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:10.933 [2024-07-15 15:17:48.813773] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:10.933 [2024-07-15 15:17:48.813781] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:10.933 [2024-07-15 15:17:48.813794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:10.933 [2024-07-15 15:17:48.813802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:10.933 [2024-07-15 15:17:48.813809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:10.933 [2024-07-15 15:17:48.813816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:10.933 [2024-07-15 15:17:48.813823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:10.933 [2024-07-15 15:17:48.813830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:10.933 [2024-07-15 15:17:48.813837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:10.933 [2024-07-15 15:17:48.813844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 
blk_sz:0x800 00:22:10.933 [2024-07-15 15:17:48.813852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:10.933 [2024-07-15 15:17:48.813859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:10.933 [2024-07-15 15:17:48.813866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:10.933 [2024-07-15 15:17:48.813873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:10.933 [2024-07-15 15:17:48.813880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:10.933 [2024-07-15 15:17:48.813887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:10.933 [2024-07-15 15:17:48.813894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:10.933 [2024-07-15 15:17:48.813901] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:10.933 [2024-07-15 15:17:48.813909] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:10.933 [2024-07-15 15:17:48.813917] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:10.933 [2024-07-15 15:17:48.813924] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:10.933 [2024-07-15 15:17:48.813931] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:10.933 [2024-07-15 15:17:48.813938] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:10.933 [2024-07-15 15:17:48.813946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.933 [2024-07-15 15:17:48.813954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:10.933 [2024-07-15 15:17:48.813962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.595 ms 00:22:10.933 [2024-07-15 15:17:48.813969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.933 [2024-07-15 15:17:48.867363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.933 [2024-07-15 15:17:48.867421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:10.933 [2024-07-15 15:17:48.867435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.365 ms 00:22:10.933 [2024-07-15 15:17:48.867443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.933 [2024-07-15 15:17:48.867624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.933 [2024-07-15 15:17:48.867634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:10.933 [2024-07-15 15:17:48.867643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:22:10.933 [2024-07-15 
15:17:48.867654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.933 [2024-07-15 15:17:48.916626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.933 [2024-07-15 15:17:48.916674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:10.933 [2024-07-15 15:17:48.916685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.034 ms 00:22:10.933 [2024-07-15 15:17:48.916693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.933 [2024-07-15 15:17:48.916812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.933 [2024-07-15 15:17:48.916822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:10.933 [2024-07-15 15:17:48.916829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:10.933 [2024-07-15 15:17:48.916836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.933 [2024-07-15 15:17:48.917285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.933 [2024-07-15 15:17:48.917297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:10.933 [2024-07-15 15:17:48.917305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.431 ms 00:22:10.933 [2024-07-15 15:17:48.917312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.933 [2024-07-15 15:17:48.917431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.933 [2024-07-15 15:17:48.917446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:10.933 [2024-07-15 15:17:48.917455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:22:10.933 [2024-07-15 15:17:48.917462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.933 [2024-07-15 15:17:48.938626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.933 [2024-07-15 15:17:48.938672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:10.933 [2024-07-15 15:17:48.938683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.178 ms 00:22:10.933 [2024-07-15 15:17:48.938691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.933 [2024-07-15 15:17:48.960229] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:22:10.933 [2024-07-15 15:17:48.960285] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:10.933 [2024-07-15 15:17:48.960300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.933 [2024-07-15 15:17:48.960324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:10.933 [2024-07-15 15:17:48.960334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.500 ms 00:22:10.933 [2024-07-15 15:17:48.960342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.933 [2024-07-15 15:17:48.991032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.933 [2024-07-15 15:17:48.991078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:10.933 [2024-07-15 15:17:48.991091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.617 ms 00:22:10.933 [2024-07-15 15:17:48.991098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:22:10.933 [2024-07-15 15:17:49.010517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.933 [2024-07-15 15:17:49.010560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:10.933 [2024-07-15 15:17:49.010572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.367 ms 00:22:10.933 [2024-07-15 15:17:49.010579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.933 [2024-07-15 15:17:49.030092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.933 [2024-07-15 15:17:49.030131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:10.933 [2024-07-15 15:17:49.030143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.471 ms 00:22:10.933 [2024-07-15 15:17:49.030150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.933 [2024-07-15 15:17:49.030938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.933 [2024-07-15 15:17:49.030984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:10.933 [2024-07-15 15:17:49.031006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.691 ms 00:22:10.933 [2024-07-15 15:17:49.031014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.193 [2024-07-15 15:17:49.124020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.193 [2024-07-15 15:17:49.124104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:11.193 [2024-07-15 15:17:49.124119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.156 ms 00:22:11.193 [2024-07-15 15:17:49.124127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.193 [2024-07-15 15:17:49.136639] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:11.193 [2024-07-15 15:17:49.153257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.193 [2024-07-15 15:17:49.153321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:11.193 [2024-07-15 15:17:49.153334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.039 ms 00:22:11.193 [2024-07-15 15:17:49.153357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.193 [2024-07-15 15:17:49.153476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.193 [2024-07-15 15:17:49.153488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:11.193 [2024-07-15 15:17:49.153501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:11.193 [2024-07-15 15:17:49.153509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.193 [2024-07-15 15:17:49.153562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.193 [2024-07-15 15:17:49.153570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:11.193 [2024-07-15 15:17:49.153578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:22:11.193 [2024-07-15 15:17:49.153586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.193 [2024-07-15 15:17:49.153606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.193 [2024-07-15 15:17:49.153613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 
00:22:11.193 [2024-07-15 15:17:49.153620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:11.193 [2024-07-15 15:17:49.153629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.193 [2024-07-15 15:17:49.153660] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:11.193 [2024-07-15 15:17:49.153668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.193 [2024-07-15 15:17:49.153675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:11.193 [2024-07-15 15:17:49.153683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:11.194 [2024-07-15 15:17:49.153690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.194 [2024-07-15 15:17:49.193479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.194 [2024-07-15 15:17:49.193593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:11.194 [2024-07-15 15:17:49.193630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.844 ms 00:22:11.194 [2024-07-15 15:17:49.193650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.194 [2024-07-15 15:17:49.193794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.194 [2024-07-15 15:17:49.193836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:11.194 [2024-07-15 15:17:49.193867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:22:11.194 [2024-07-15 15:17:49.193895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.194 [2024-07-15 15:17:49.194921] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:11.194 [2024-07-15 15:17:49.200757] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 422.946 ms, result 0 00:22:11.194 [2024-07-15 15:17:49.201673] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:11.194 [2024-07-15 15:17:49.220800] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:11.453  Copying: 4096/4096 [kB] (average 29 MBps)[2024-07-15 15:17:49.362181] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:11.453 [2024-07-15 15:17:49.376878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.453 [2024-07-15 15:17:49.376982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:11.453 [2024-07-15 15:17:49.377021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:11.453 [2024-07-15 15:17:49.377041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.453 [2024-07-15 15:17:49.377076] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:11.453 [2024-07-15 15:17:49.380925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.453 [2024-07-15 15:17:49.381013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:11.453 [2024-07-15 15:17:49.381026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.822 ms 00:22:11.453 [2024-07-15 15:17:49.381033] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:22:11.453 [2024-07-15 15:17:49.383105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.453 [2024-07-15 15:17:49.383142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:11.453 [2024-07-15 15:17:49.383153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.050 ms 00:22:11.453 [2024-07-15 15:17:49.383161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.453 [2024-07-15 15:17:49.386520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.453 [2024-07-15 15:17:49.386547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:11.453 [2024-07-15 15:17:49.386557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.348 ms 00:22:11.453 [2024-07-15 15:17:49.386586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.453 [2024-07-15 15:17:49.392299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.453 [2024-07-15 15:17:49.392331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:11.453 [2024-07-15 15:17:49.392341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.696 ms 00:22:11.453 [2024-07-15 15:17:49.392349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.453 [2024-07-15 15:17:49.431548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.453 [2024-07-15 15:17:49.431605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:11.453 [2024-07-15 15:17:49.431618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.201 ms 00:22:11.453 [2024-07-15 15:17:49.431626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.453 [2024-07-15 15:17:49.454325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.453 [2024-07-15 15:17:49.454363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:11.453 [2024-07-15 15:17:49.454375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.686 ms 00:22:11.453 [2024-07-15 15:17:49.454383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.453 [2024-07-15 15:17:49.454557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.453 [2024-07-15 15:17:49.454569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:11.453 [2024-07-15 15:17:49.454578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:22:11.453 [2024-07-15 15:17:49.454585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.454 [2024-07-15 15:17:49.494931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.454 [2024-07-15 15:17:49.494981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:11.454 [2024-07-15 15:17:49.495005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.404 ms 00:22:11.454 [2024-07-15 15:17:49.495014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.454 [2024-07-15 15:17:49.534954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.454 [2024-07-15 15:17:49.535082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:11.454 [2024-07-15 15:17:49.535100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.944 ms 00:22:11.454 
[2024-07-15 15:17:49.535123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.714 [2024-07-15 15:17:49.576304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.714 [2024-07-15 15:17:49.576357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:11.714 [2024-07-15 15:17:49.576372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.188 ms 00:22:11.714 [2024-07-15 15:17:49.576381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.714 [2024-07-15 15:17:49.617886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.714 [2024-07-15 15:17:49.617947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:11.714 [2024-07-15 15:17:49.617962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.475 ms 00:22:11.714 [2024-07-15 15:17:49.617971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.714 [2024-07-15 15:17:49.618094] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:11.714 [2024-07-15 15:17:49.618117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 
[2024-07-15 15:17:49.618274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 
state: free 00:22:11.714 [2024-07-15 15:17:49.618521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 
0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:11.714 [2024-07-15 15:17:49.618776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:11.715 [2024-07-15 15:17:49.618785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:11.715 [2024-07-15 15:17:49.618794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:11.715 [2024-07-15 15:17:49.618803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:11.715 [2024-07-15 15:17:49.618812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:11.715 [2024-07-15 15:17:49.618820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:11.715 [2024-07-15 15:17:49.618829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:11.715 [2024-07-15 15:17:49.618839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:11.715 [2024-07-15 15:17:49.618853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:11.715 [2024-07-15 15:17:49.618865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:11.715 [2024-07-15 15:17:49.618874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:11.715 [2024-07-15 15:17:49.618883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:11.715 [2024-07-15 15:17:49.618892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:11.715 [2024-07-15 15:17:49.618901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:11.715 [2024-07-15 15:17:49.618909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:11.715 [2024-07-15 15:17:49.618918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:11.715 [2024-07-15 15:17:49.618927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:11.715 [2024-07-15 15:17:49.618936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:11.715 [2024-07-15 15:17:49.618945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:11.715 [2024-07-15 15:17:49.618954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:11.715 [2024-07-15 15:17:49.618963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:11.715 [2024-07-15 15:17:49.618972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:11.715 [2024-07-15 15:17:49.618981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:11.715 [2024-07-15 15:17:49.619002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:11.715 [2024-07-15 15:17:49.619012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:11.715 [2024-07-15 15:17:49.619022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:11.715 [2024-07-15 15:17:49.619031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:11.715 [2024-07-15 15:17:49.619040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:11.715 [2024-07-15 15:17:49.619049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:11.715 [2024-07-15 15:17:49.619058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:11.715 [2024-07-15 15:17:49.619067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:11.715 [2024-07-15 15:17:49.619085] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:11.715 [2024-07-15 15:17:49.619094] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3a743e15-4abe-4742-97e2-4f048b457e12 00:22:11.715 [2024-07-15 15:17:49.619103] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:11.715 [2024-07-15 15:17:49.619112] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:11.715 [2024-07-15 15:17:49.619136] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:11.715 [2024-07-15 15:17:49.619146] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:11.715 [2024-07-15 15:17:49.619154] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:11.715 [2024-07-15 15:17:49.619163] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:11.715 [2024-07-15 15:17:49.619172] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:11.715 [2024-07-15 15:17:49.619180] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:11.715 [2024-07-15 15:17:49.619188] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:11.715 [2024-07-15 15:17:49.619198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.715 [2024-07-15 15:17:49.619207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:11.715 [2024-07-15 15:17:49.619216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.107 ms 00:22:11.715 [2024-07-15 15:17:49.619229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.715 [2024-07-15 15:17:49.642229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.715 [2024-07-15 15:17:49.642274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:11.715 [2024-07-15 15:17:49.642286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.014 ms 00:22:11.715 [2024-07-15 15:17:49.642311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.715 [2024-07-15 15:17:49.642836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.715 [2024-07-15 15:17:49.642848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:11.715 [2024-07-15 15:17:49.642864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.474 ms 00:22:11.715 [2024-07-15 15:17:49.642873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.715 [2024-07-15 15:17:49.694862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.715 [2024-07-15 15:17:49.694921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:11.715 [2024-07-15 15:17:49.694934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.715 [2024-07-15 15:17:49.694942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.715 [2024-07-15 15:17:49.695058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.715 [2024-07-15 15:17:49.695070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:11.715 [2024-07-15 15:17:49.695086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.715 [2024-07-15 15:17:49.695095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.715 [2024-07-15 15:17:49.695152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.715 [2024-07-15 15:17:49.695164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:11.715 [2024-07-15 15:17:49.695172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.715 [2024-07-15 15:17:49.695180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.715 [2024-07-15 15:17:49.695201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.715 [2024-07-15 15:17:49.695211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:11.715 [2024-07-15 15:17:49.695219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.715 [2024-07-15 15:17:49.695232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.715 [2024-07-15 15:17:49.820997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.715 [2024-07-15 15:17:49.821060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:11.715 [2024-07-15 15:17:49.821072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.715 [2024-07-15 15:17:49.821080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.975 [2024-07-15 15:17:49.928765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.975 [2024-07-15 15:17:49.928823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:11.975 [2024-07-15 15:17:49.928841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.975 [2024-07-15 15:17:49.928848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.975 [2024-07-15 15:17:49.928922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.975 [2024-07-15 15:17:49.928930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:11.975 [2024-07-15 15:17:49.928938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.975 [2024-07-15 15:17:49.928945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.975 [2024-07-15 15:17:49.928972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:22:11.975 [2024-07-15 15:17:49.928979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:11.975 [2024-07-15 15:17:49.928987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.975 [2024-07-15 15:17:49.929001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.975 [2024-07-15 15:17:49.929102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.975 [2024-07-15 15:17:49.929113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:11.975 [2024-07-15 15:17:49.929120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.975 [2024-07-15 15:17:49.929127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.975 [2024-07-15 15:17:49.929160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.975 [2024-07-15 15:17:49.929169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:11.975 [2024-07-15 15:17:49.929178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.975 [2024-07-15 15:17:49.929185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.975 [2024-07-15 15:17:49.929225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.975 [2024-07-15 15:17:49.929233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:11.975 [2024-07-15 15:17:49.929241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.975 [2024-07-15 15:17:49.929248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.975 [2024-07-15 15:17:49.929292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.975 [2024-07-15 15:17:49.929301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:11.975 [2024-07-15 15:17:49.929309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.975 [2024-07-15 15:17:49.929316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.975 [2024-07-15 15:17:49.929450] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 553.634 ms, result 0 00:22:13.355 00:22:13.355 00:22:13.355 15:17:51 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=82100 00:22:13.355 15:17:51 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:22:13.355 15:17:51 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 82100 00:22:13.355 15:17:51 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 82100 ']' 00:22:13.355 15:17:51 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:13.355 15:17:51 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:13.355 15:17:51 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:13.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:13.355 15:17:51 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:13.355 15:17:51 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:22:13.355 [2024-07-15 15:17:51.278082] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
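A minimal sketch of what ftl/trim.sh is driving at this point, reconstructed only from commands visible in this log; the pid handling and the config JSON source are assumptions (the harness actually uses its own waitforlisten/killprocess helpers), while the binary paths, the ftl0 bdev name, the /var/tmp/spdk.sock socket and the unmap parameters are taken verbatim from the log:
  # start the SPDK target with FTL init logging enabled, as logged above
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
  svcpid=$!                                   # the log shows svcpid=82100
  # block until the target listens on /var/tmp/spdk.sock (harness: waitforlisten "$svcpid")
  # recreate the ftl0 bdev from the previously saved configuration;
  # $ftl_config_json is an assumed placeholder, the log only shows "load_config"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config < "$ftl_config_json"
  # the two trims exercised by this test (they appear further down as 'FTL trim' management processes)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
  kill "$svcpid"                              # harness: killprocess 82100, then wait
The two bdev_ftl_unmap calls are what produce the 'Process trim' steps and the "Management process finished, name 'FTL trim'" messages that follow the FTL startup trace below.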
00:22:13.355 [2024-07-15 15:17:51.278291] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82100 ] 00:22:13.355 [2024-07-15 15:17:51.441261] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.614 [2024-07-15 15:17:51.677893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.552 15:17:52 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:14.552 15:17:52 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0 00:22:14.552 15:17:52 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:22:14.810 [2024-07-15 15:17:52.818585] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:14.810 [2024-07-15 15:17:52.818657] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:15.070 [2024-07-15 15:17:52.992641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.070 [2024-07-15 15:17:52.992700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:15.070 [2024-07-15 15:17:52.992714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:15.071 [2024-07-15 15:17:52.992724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.071 [2024-07-15 15:17:52.995885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.071 [2024-07-15 15:17:52.995922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:15.071 [2024-07-15 15:17:52.995932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.148 ms 00:22:15.071 [2024-07-15 15:17:52.995957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.071 [2024-07-15 15:17:52.996070] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:15.071 [2024-07-15 15:17:52.997337] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:15.071 [2024-07-15 15:17:52.997366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.071 [2024-07-15 15:17:52.997376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:15.071 [2024-07-15 15:17:52.997386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.308 ms 00:22:15.071 [2024-07-15 15:17:52.997394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.071 [2024-07-15 15:17:52.998886] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:15.071 [2024-07-15 15:17:53.020428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.071 [2024-07-15 15:17:53.020528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:15.071 [2024-07-15 15:17:53.020550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.577 ms 00:22:15.071 [2024-07-15 15:17:53.020559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.071 [2024-07-15 15:17:53.020677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.071 [2024-07-15 15:17:53.020690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:15.071 [2024-07-15 15:17:53.020702] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:22:15.071 [2024-07-15 15:17:53.020710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.071 [2024-07-15 15:17:53.027624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.071 [2024-07-15 15:17:53.027662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:15.071 [2024-07-15 15:17:53.027679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.866 ms 00:22:15.071 [2024-07-15 15:17:53.027686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.071 [2024-07-15 15:17:53.027801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.071 [2024-07-15 15:17:53.027814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:15.071 [2024-07-15 15:17:53.027825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:22:15.071 [2024-07-15 15:17:53.027833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.071 [2024-07-15 15:17:53.027888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.071 [2024-07-15 15:17:53.027897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:15.071 [2024-07-15 15:17:53.027907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:15.071 [2024-07-15 15:17:53.027914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.071 [2024-07-15 15:17:53.027940] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:15.071 [2024-07-15 15:17:53.033566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.071 [2024-07-15 15:17:53.033598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:15.071 [2024-07-15 15:17:53.033607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.646 ms 00:22:15.071 [2024-07-15 15:17:53.033616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.071 [2024-07-15 15:17:53.033681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.071 [2024-07-15 15:17:53.033695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:15.071 [2024-07-15 15:17:53.033703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:15.071 [2024-07-15 15:17:53.033714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.071 [2024-07-15 15:17:53.033734] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:15.071 [2024-07-15 15:17:53.033756] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:15.071 [2024-07-15 15:17:53.033818] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:15.071 [2024-07-15 15:17:53.033841] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:15.071 [2024-07-15 15:17:53.033937] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:15.071 [2024-07-15 15:17:53.033951] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:15.071 [2024-07-15 15:17:53.033964] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:15.071 [2024-07-15 15:17:53.033977] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:15.071 [2024-07-15 15:17:53.033987] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:15.071 [2024-07-15 15:17:53.033997] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:15.071 [2024-07-15 15:17:53.034022] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:15.071 [2024-07-15 15:17:53.034032] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:15.071 [2024-07-15 15:17:53.034041] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:15.071 [2024-07-15 15:17:53.034054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.071 [2024-07-15 15:17:53.034062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:15.071 [2024-07-15 15:17:53.034072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:22:15.071 [2024-07-15 15:17:53.034080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.071 [2024-07-15 15:17:53.034166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.071 [2024-07-15 15:17:53.034175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:15.071 [2024-07-15 15:17:53.034185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:22:15.071 [2024-07-15 15:17:53.034194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.071 [2024-07-15 15:17:53.034301] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:15.071 [2024-07-15 15:17:53.034319] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:15.071 [2024-07-15 15:17:53.034330] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:15.071 [2024-07-15 15:17:53.034338] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:15.071 [2024-07-15 15:17:53.034348] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:15.071 [2024-07-15 15:17:53.034356] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:15.071 [2024-07-15 15:17:53.034367] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:15.071 [2024-07-15 15:17:53.034375] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:15.071 [2024-07-15 15:17:53.034387] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:15.071 [2024-07-15 15:17:53.034394] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:15.071 [2024-07-15 15:17:53.034403] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:15.071 [2024-07-15 15:17:53.034410] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:15.071 [2024-07-15 15:17:53.034419] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:15.071 [2024-07-15 15:17:53.034426] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:15.071 [2024-07-15 15:17:53.034436] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:15.071 [2024-07-15 15:17:53.034443] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:15.071 
[2024-07-15 15:17:53.034452] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:15.071 [2024-07-15 15:17:53.034466] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:15.071 [2024-07-15 15:17:53.034475] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:15.071 [2024-07-15 15:17:53.034483] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:15.071 [2024-07-15 15:17:53.034492] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:15.071 [2024-07-15 15:17:53.034499] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:15.071 [2024-07-15 15:17:53.034508] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:15.071 [2024-07-15 15:17:53.034515] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:15.071 [2024-07-15 15:17:53.034527] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:15.071 [2024-07-15 15:17:53.034534] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:15.071 [2024-07-15 15:17:53.034543] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:15.071 [2024-07-15 15:17:53.034560] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:15.071 [2024-07-15 15:17:53.034569] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:15.071 [2024-07-15 15:17:53.034577] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:15.071 [2024-07-15 15:17:53.034586] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:15.071 [2024-07-15 15:17:53.034594] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:15.071 [2024-07-15 15:17:53.034603] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:15.071 [2024-07-15 15:17:53.034611] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:15.071 [2024-07-15 15:17:53.034621] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:15.071 [2024-07-15 15:17:53.034628] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:15.071 [2024-07-15 15:17:53.034637] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:15.071 [2024-07-15 15:17:53.034645] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:15.071 [2024-07-15 15:17:53.034655] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:15.071 [2024-07-15 15:17:53.034662] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:15.071 [2024-07-15 15:17:53.034673] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:15.071 [2024-07-15 15:17:53.034680] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:15.071 [2024-07-15 15:17:53.034689] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:15.071 [2024-07-15 15:17:53.034697] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:15.071 [2024-07-15 15:17:53.034709] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:15.071 [2024-07-15 15:17:53.034717] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:15.071 [2024-07-15 15:17:53.034727] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:15.071 [2024-07-15 15:17:53.034735] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:22:15.071 [2024-07-15 15:17:53.034745] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:15.071 [2024-07-15 15:17:53.034752] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:15.071 [2024-07-15 15:17:53.034761] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:15.071 [2024-07-15 15:17:53.034768] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:15.072 [2024-07-15 15:17:53.034777] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:15.072 [2024-07-15 15:17:53.034787] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:15.072 [2024-07-15 15:17:53.034799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:15.072 [2024-07-15 15:17:53.034808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:15.072 [2024-07-15 15:17:53.034822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:15.072 [2024-07-15 15:17:53.034830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:15.072 [2024-07-15 15:17:53.034840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:15.072 [2024-07-15 15:17:53.034848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:15.072 [2024-07-15 15:17:53.034857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:15.072 [2024-07-15 15:17:53.034865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:15.072 [2024-07-15 15:17:53.034875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:15.072 [2024-07-15 15:17:53.034882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:15.072 [2024-07-15 15:17:53.034892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:15.072 [2024-07-15 15:17:53.034900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:15.072 [2024-07-15 15:17:53.034909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:15.072 [2024-07-15 15:17:53.034918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:15.072 [2024-07-15 15:17:53.034928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:15.072 [2024-07-15 15:17:53.034936] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:15.072 [2024-07-15 
15:17:53.034946] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:15.072 [2024-07-15 15:17:53.034955] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:15.072 [2024-07-15 15:17:53.034966] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:15.072 [2024-07-15 15:17:53.034974] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:15.072 [2024-07-15 15:17:53.034984] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:15.072 [2024-07-15 15:17:53.035002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.072 [2024-07-15 15:17:53.035012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:15.072 [2024-07-15 15:17:53.035020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.765 ms 00:22:15.072 [2024-07-15 15:17:53.035030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.072 [2024-07-15 15:17:53.082659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.072 [2024-07-15 15:17:53.082771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:15.072 [2024-07-15 15:17:53.082813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.649 ms 00:22:15.072 [2024-07-15 15:17:53.082846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.072 [2024-07-15 15:17:53.083101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.072 [2024-07-15 15:17:53.083155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:15.072 [2024-07-15 15:17:53.083192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:15.072 [2024-07-15 15:17:53.083229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.072 [2024-07-15 15:17:53.134784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.072 [2024-07-15 15:17:53.134889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:15.072 [2024-07-15 15:17:53.134921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.601 ms 00:22:15.072 [2024-07-15 15:17:53.134943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.072 [2024-07-15 15:17:53.135066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.072 [2024-07-15 15:17:53.135109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:15.072 [2024-07-15 15:17:53.135138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:15.072 [2024-07-15 15:17:53.135167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.072 [2024-07-15 15:17:53.135627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.072 [2024-07-15 15:17:53.135672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:15.072 [2024-07-15 15:17:53.135705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.418 ms 00:22:15.072 [2024-07-15 15:17:53.135732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:22:15.072 [2024-07-15 15:17:53.135863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.072 [2024-07-15 15:17:53.135903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:15.072 [2024-07-15 15:17:53.135931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:22:15.072 [2024-07-15 15:17:53.135953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.072 [2024-07-15 15:17:53.158642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.072 [2024-07-15 15:17:53.158746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:15.072 [2024-07-15 15:17:53.158777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.645 ms 00:22:15.072 [2024-07-15 15:17:53.158799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.072 [2024-07-15 15:17:53.178196] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:15.072 [2024-07-15 15:17:53.178308] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:15.072 [2024-07-15 15:17:53.178358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.072 [2024-07-15 15:17:53.178393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:15.072 [2024-07-15 15:17:53.178416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.454 ms 00:22:15.072 [2024-07-15 15:17:53.178528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.330 [2024-07-15 15:17:53.210189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.330 [2024-07-15 15:17:53.210351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:15.330 [2024-07-15 15:17:53.210385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.606 ms 00:22:15.330 [2024-07-15 15:17:53.210408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.330 [2024-07-15 15:17:53.232451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.330 [2024-07-15 15:17:53.232599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:15.330 [2024-07-15 15:17:53.232657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.891 ms 00:22:15.330 [2024-07-15 15:17:53.232684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.330 [2024-07-15 15:17:53.254417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.330 [2024-07-15 15:17:53.254586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:15.330 [2024-07-15 15:17:53.254622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.619 ms 00:22:15.330 [2024-07-15 15:17:53.254645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.330 [2024-07-15 15:17:53.255633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.330 [2024-07-15 15:17:53.255700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:15.330 [2024-07-15 15:17:53.255733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.788 ms 00:22:15.330 [2024-07-15 15:17:53.255767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.330 [2024-07-15 
15:17:53.359779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.330 [2024-07-15 15:17:53.359925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:15.330 [2024-07-15 15:17:53.359959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.162 ms 00:22:15.330 [2024-07-15 15:17:53.359982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.330 [2024-07-15 15:17:53.372930] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:15.330 [2024-07-15 15:17:53.390348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.330 [2024-07-15 15:17:53.390480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:15.330 [2024-07-15 15:17:53.390531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.235 ms 00:22:15.330 [2024-07-15 15:17:53.390557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.330 [2024-07-15 15:17:53.390680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.330 [2024-07-15 15:17:53.390717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:15.330 [2024-07-15 15:17:53.390743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:15.330 [2024-07-15 15:17:53.390786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.330 [2024-07-15 15:17:53.390881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.331 [2024-07-15 15:17:53.390923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:15.331 [2024-07-15 15:17:53.390958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:22:15.331 [2024-07-15 15:17:53.390987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.331 [2024-07-15 15:17:53.391074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.331 [2024-07-15 15:17:53.391110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:15.331 [2024-07-15 15:17:53.391142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:15.331 [2024-07-15 15:17:53.391171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.331 [2024-07-15 15:17:53.391228] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:15.331 [2024-07-15 15:17:53.391262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.331 [2024-07-15 15:17:53.391296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:15.331 [2024-07-15 15:17:53.391327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:22:15.331 [2024-07-15 15:17:53.391359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.331 [2024-07-15 15:17:53.429556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.331 [2024-07-15 15:17:53.429660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:15.331 [2024-07-15 15:17:53.429690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.218 ms 00:22:15.331 [2024-07-15 15:17:53.429711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.331 [2024-07-15 15:17:53.429849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.331 [2024-07-15 15:17:53.429890] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:15.331 [2024-07-15 15:17:53.429917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:22:15.331 [2024-07-15 15:17:53.429946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.331 [2024-07-15 15:17:53.430953] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:15.331 [2024-07-15 15:17:53.436477] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 438.817 ms, result 0 00:22:15.331 [2024-07-15 15:17:53.437536] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:15.589 Some configs were skipped because the RPC state that can call them passed over. 00:22:15.589 15:17:53 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:22:15.589 [2024-07-15 15:17:53.670293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.589 [2024-07-15 15:17:53.670437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:22:15.589 [2024-07-15 15:17:53.670469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.543 ms 00:22:15.589 [2024-07-15 15:17:53.670480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.589 [2024-07-15 15:17:53.670523] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.792 ms, result 0 00:22:15.589 true 00:22:15.589 15:17:53 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:22:15.848 [2024-07-15 15:17:53.865578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.848 [2024-07-15 15:17:53.865639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:22:15.848 [2024-07-15 15:17:53.865653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.037 ms 00:22:15.848 [2024-07-15 15:17:53.865663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.848 [2024-07-15 15:17:53.865699] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.169 ms, result 0 00:22:15.848 true 00:22:15.848 15:17:53 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 82100 00:22:15.848 15:17:53 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 82100 ']' 00:22:15.848 15:17:53 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 82100 00:22:15.848 15:17:53 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname 00:22:15.848 15:17:53 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:15.848 15:17:53 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82100 00:22:15.848 killing process with pid 82100 00:22:15.848 15:17:53 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:15.848 15:17:53 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:15.848 15:17:53 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82100' 00:22:15.848 15:17:53 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 82100 00:22:15.848 15:17:53 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 82100 00:22:17.226 [2024-07-15 15:17:55.110471] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.226 [2024-07-15 15:17:55.110554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:17.226 [2024-07-15 15:17:55.110573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:17.226 [2024-07-15 15:17:55.110582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.226 [2024-07-15 15:17:55.110611] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:17.226 [2024-07-15 15:17:55.115147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.226 [2024-07-15 15:17:55.115197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:17.226 [2024-07-15 15:17:55.115211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.529 ms 00:22:17.226 [2024-07-15 15:17:55.115224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.226 [2024-07-15 15:17:55.115545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.226 [2024-07-15 15:17:55.115567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:17.226 [2024-07-15 15:17:55.115578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:22:17.226 [2024-07-15 15:17:55.115588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.226 [2024-07-15 15:17:55.119401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.226 [2024-07-15 15:17:55.119445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:17.226 [2024-07-15 15:17:55.119460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.801 ms 00:22:17.226 [2024-07-15 15:17:55.119471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.226 [2024-07-15 15:17:55.126258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.226 [2024-07-15 15:17:55.126301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:17.226 [2024-07-15 15:17:55.126313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.762 ms 00:22:17.226 [2024-07-15 15:17:55.126326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.226 [2024-07-15 15:17:55.145356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.226 [2024-07-15 15:17:55.145415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:17.226 [2024-07-15 15:17:55.145429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.985 ms 00:22:17.226 [2024-07-15 15:17:55.145443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.226 [2024-07-15 15:17:55.158838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.226 [2024-07-15 15:17:55.158897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:17.226 [2024-07-15 15:17:55.158915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.358 ms 00:22:17.226 [2024-07-15 15:17:55.158926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.226 [2024-07-15 15:17:55.159128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.226 [2024-07-15 15:17:55.159144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:17.226 [2024-07-15 15:17:55.159154] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:22:17.226 [2024-07-15 15:17:55.159180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.226 [2024-07-15 15:17:55.178706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.226 [2024-07-15 15:17:55.178761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:17.226 [2024-07-15 15:17:55.178774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.542 ms 00:22:17.226 [2024-07-15 15:17:55.178784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.226 [2024-07-15 15:17:55.197419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.226 [2024-07-15 15:17:55.197470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:17.226 [2024-07-15 15:17:55.197483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.615 ms 00:22:17.226 [2024-07-15 15:17:55.197499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.226 [2024-07-15 15:17:55.215772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.226 [2024-07-15 15:17:55.215826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:17.226 [2024-07-15 15:17:55.215839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.240 ms 00:22:17.226 [2024-07-15 15:17:55.215849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.226 [2024-07-15 15:17:55.233429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.226 [2024-07-15 15:17:55.233478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:17.226 [2024-07-15 15:17:55.233490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.523 ms 00:22:17.226 [2024-07-15 15:17:55.233501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.226 [2024-07-15 15:17:55.233634] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:17.226 [2024-07-15 15:17:55.233656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:17.226 [2024-07-15 15:17:55.233668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:17.226 [2024-07-15 15:17:55.233680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:17.226 [2024-07-15 15:17:55.233689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:17.226 [2024-07-15 15:17:55.233700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:17.226 [2024-07-15 15:17:55.233709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:17.226 [2024-07-15 15:17:55.233723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:17.226 [2024-07-15 15:17:55.233732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:17.226 [2024-07-15 15:17:55.233743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:17.226 [2024-07-15 15:17:55.233753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:17.226 [2024-07-15 
15:17:55.233763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:17.226 [2024-07-15 15:17:55.233773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:17.226 [2024-07-15 15:17:55.233783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:17.226 [2024-07-15 15:17:55.233792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:17.226 [2024-07-15 15:17:55.233803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:17.226 [2024-07-15 15:17:55.233812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:17.226 [2024-07-15 15:17:55.233825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:17.226 [2024-07-15 15:17:55.233834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:17.226 [2024-07-15 15:17:55.233845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:17.226 [2024-07-15 15:17:55.233854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:17.226 [2024-07-15 15:17:55.233864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:17.226 [2024-07-15 15:17:55.233884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:17.226 [2024-07-15 15:17:55.233897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:17.226 [2024-07-15 15:17:55.233905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:17.226 [2024-07-15 15:17:55.233915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:17.226 [2024-07-15 15:17:55.233924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:17.226 [2024-07-15 15:17:55.233935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:17.226 [2024-07-15 15:17:55.233943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:17.226 [2024-07-15 15:17:55.233954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:17.226 [2024-07-15 15:17:55.233963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:17.226 [2024-07-15 15:17:55.233974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:17.226 [2024-07-15 15:17:55.233982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:17.226 [2024-07-15 15:17:55.233992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:17.226 [2024-07-15 15:17:55.234018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:17.226 [2024-07-15 15:17:55.234029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:22:17.226 [2024-07-15 15:17:55.234038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:17.226 [2024-07-15 15:17:55.234049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:17.226 [2024-07-15 15:17:55.234057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:17.226 [2024-07-15 15:17:55.234069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:17.226 [2024-07-15 15:17:55.234078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:17.226 [2024-07-15 15:17:55.234088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:17.226 [2024-07-15 15:17:55.234097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:17.226 [2024-07-15 15:17:55.234108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:17.226 [2024-07-15 15:17:55.234116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:17.226 [2024-07-15 15:17:55.234127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:17.226 [2024-07-15 15:17:55.234135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:17.227 [2024-07-15 15:17:55.234708] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:17.227 [2024-07-15 15:17:55.234716] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3a743e15-4abe-4742-97e2-4f048b457e12 00:22:17.227 [2024-07-15 15:17:55.234733] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:17.227 [2024-07-15 15:17:55.234741] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:17.227 [2024-07-15 15:17:55.234753] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:17.227 [2024-07-15 15:17:55.234762] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:17.227 [2024-07-15 15:17:55.234772] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:17.227 [2024-07-15 15:17:55.234781] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:17.227 [2024-07-15 15:17:55.234791] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:17.227 [2024-07-15 15:17:55.234799] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:17.227 [2024-07-15 15:17:55.234830] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:17.227 [2024-07-15 15:17:55.234840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:17.227 [2024-07-15 15:17:55.234852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:17.227 [2024-07-15 15:17:55.234862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.211 ms 00:22:17.227 [2024-07-15 15:17:55.234873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.227 [2024-07-15 15:17:55.259355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.227 [2024-07-15 15:17:55.259419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:17.227 [2024-07-15 15:17:55.259449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.496 ms 00:22:17.227 [2024-07-15 15:17:55.259464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.227 [2024-07-15 15:17:55.260204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.227 [2024-07-15 15:17:55.260228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:17.227 [2024-07-15 15:17:55.260241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.651 ms 00:22:17.227 [2024-07-15 15:17:55.260254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.227 [2024-07-15 15:17:55.329767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:17.227 [2024-07-15 15:17:55.329819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:17.227 [2024-07-15 15:17:55.329848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:17.227 [2024-07-15 15:17:55.329858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.227 [2024-07-15 15:17:55.329964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:17.227 [2024-07-15 15:17:55.329975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:17.227 [2024-07-15 15:17:55.329983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:17.227 [2024-07-15 15:17:55.329995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.227 [2024-07-15 15:17:55.330064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:17.227 [2024-07-15 15:17:55.330080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:17.227 [2024-07-15 15:17:55.330088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:17.227 [2024-07-15 15:17:55.330099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.227 [2024-07-15 15:17:55.330119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:17.227 [2024-07-15 15:17:55.330128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:17.227 [2024-07-15 15:17:55.330135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:17.227 [2024-07-15 15:17:55.330144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.486 [2024-07-15 15:17:55.456281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:17.486 [2024-07-15 15:17:55.456335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:17.486 [2024-07-15 15:17:55.456347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:17.486 [2024-07-15 15:17:55.456357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.486 [2024-07-15 
15:17:55.560300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:17.486 [2024-07-15 15:17:55.560354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:17.486 [2024-07-15 15:17:55.560366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:17.486 [2024-07-15 15:17:55.560375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.486 [2024-07-15 15:17:55.560460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:17.486 [2024-07-15 15:17:55.560471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:17.486 [2024-07-15 15:17:55.560478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:17.486 [2024-07-15 15:17:55.560489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.486 [2024-07-15 15:17:55.560515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:17.486 [2024-07-15 15:17:55.560525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:17.486 [2024-07-15 15:17:55.560532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:17.486 [2024-07-15 15:17:55.560540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.486 [2024-07-15 15:17:55.560639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:17.486 [2024-07-15 15:17:55.560651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:17.486 [2024-07-15 15:17:55.560659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:17.486 [2024-07-15 15:17:55.560667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.486 [2024-07-15 15:17:55.560700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:17.486 [2024-07-15 15:17:55.560711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:17.486 [2024-07-15 15:17:55.560718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:17.486 [2024-07-15 15:17:55.560727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.486 [2024-07-15 15:17:55.560764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:17.486 [2024-07-15 15:17:55.560777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:17.486 [2024-07-15 15:17:55.560784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:17.486 [2024-07-15 15:17:55.560794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.486 [2024-07-15 15:17:55.560835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:17.486 [2024-07-15 15:17:55.560846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:17.486 [2024-07-15 15:17:55.560852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:17.486 [2024-07-15 15:17:55.560861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.486 [2024-07-15 15:17:55.561010] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 451.390 ms, result 0 00:22:18.862 15:17:56 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:18.862 [2024-07-15 15:17:56.731769] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:22:18.862 [2024-07-15 15:17:56.731877] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82170 ] 00:22:18.862 [2024-07-15 15:17:56.893594] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.119 [2024-07-15 15:17:57.137960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.704 [2024-07-15 15:17:57.546316] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:19.704 [2024-07-15 15:17:57.546380] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:19.704 [2024-07-15 15:17:57.703892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.705 [2024-07-15 15:17:57.703960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:19.705 [2024-07-15 15:17:57.703973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:19.705 [2024-07-15 15:17:57.703997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.705 [2024-07-15 15:17:57.706771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.705 [2024-07-15 15:17:57.706806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:19.705 [2024-07-15 15:17:57.706816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.751 ms 00:22:19.705 [2024-07-15 15:17:57.706823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.705 [2024-07-15 15:17:57.706916] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:19.705 [2024-07-15 15:17:57.708163] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:19.705 [2024-07-15 15:17:57.708237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.705 [2024-07-15 15:17:57.708275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:19.705 [2024-07-15 15:17:57.708296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.331 ms 00:22:19.705 [2024-07-15 15:17:57.708343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.705 [2024-07-15 15:17:57.709851] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:19.705 [2024-07-15 15:17:57.730249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.705 [2024-07-15 15:17:57.730317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:19.705 [2024-07-15 15:17:57.730337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.438 ms 00:22:19.705 [2024-07-15 15:17:57.730360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.705 [2024-07-15 15:17:57.730448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.705 [2024-07-15 15:17:57.730468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:19.705 [2024-07-15 15:17:57.730477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:22:19.705 [2024-07-15 
15:17:57.730484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.705 [2024-07-15 15:17:57.737219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.705 [2024-07-15 15:17:57.737247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:19.705 [2024-07-15 15:17:57.737257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.688 ms 00:22:19.705 [2024-07-15 15:17:57.737264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.705 [2024-07-15 15:17:57.737351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.705 [2024-07-15 15:17:57.737365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:19.705 [2024-07-15 15:17:57.737373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:22:19.705 [2024-07-15 15:17:57.737380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.705 [2024-07-15 15:17:57.737431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.705 [2024-07-15 15:17:57.737441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:19.705 [2024-07-15 15:17:57.737449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:19.705 [2024-07-15 15:17:57.737458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.705 [2024-07-15 15:17:57.737481] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:19.705 [2024-07-15 15:17:57.743538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.705 [2024-07-15 15:17:57.743571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:19.705 [2024-07-15 15:17:57.743582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.076 ms 00:22:19.705 [2024-07-15 15:17:57.743591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.705 [2024-07-15 15:17:57.743668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.705 [2024-07-15 15:17:57.743678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:19.705 [2024-07-15 15:17:57.743687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:22:19.705 [2024-07-15 15:17:57.743694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.705 [2024-07-15 15:17:57.743712] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:19.705 [2024-07-15 15:17:57.743744] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:19.705 [2024-07-15 15:17:57.743779] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:19.705 [2024-07-15 15:17:57.743793] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:19.705 [2024-07-15 15:17:57.743873] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:19.705 [2024-07-15 15:17:57.743883] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:19.705 [2024-07-15 15:17:57.743892] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 
00:22:19.705 [2024-07-15 15:17:57.743902] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:19.705 [2024-07-15 15:17:57.743911] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:19.705 [2024-07-15 15:17:57.743918] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:19.705 [2024-07-15 15:17:57.743928] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:19.705 [2024-07-15 15:17:57.743935] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:19.705 [2024-07-15 15:17:57.743942] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:19.705 [2024-07-15 15:17:57.743949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.705 [2024-07-15 15:17:57.743956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:19.705 [2024-07-15 15:17:57.743963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:22:19.705 [2024-07-15 15:17:57.743970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.705 [2024-07-15 15:17:57.744103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.705 [2024-07-15 15:17:57.744129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:19.705 [2024-07-15 15:17:57.744148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:22:19.705 [2024-07-15 15:17:57.744171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.705 [2024-07-15 15:17:57.744282] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:19.705 [2024-07-15 15:17:57.744316] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:19.705 [2024-07-15 15:17:57.744346] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:19.705 [2024-07-15 15:17:57.744374] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:19.705 [2024-07-15 15:17:57.744401] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:19.705 [2024-07-15 15:17:57.744426] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:19.705 [2024-07-15 15:17:57.744459] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:19.705 [2024-07-15 15:17:57.744479] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:19.705 [2024-07-15 15:17:57.744509] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:19.705 [2024-07-15 15:17:57.744533] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:19.705 [2024-07-15 15:17:57.744560] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:19.705 [2024-07-15 15:17:57.744587] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:19.705 [2024-07-15 15:17:57.744613] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:19.705 [2024-07-15 15:17:57.744640] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:19.705 [2024-07-15 15:17:57.744666] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:19.705 [2024-07-15 15:17:57.744692] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:19.705 [2024-07-15 15:17:57.744719] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:22:19.705 [2024-07-15 15:17:57.744745] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:19.705 [2024-07-15 15:17:57.744787] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:19.705 [2024-07-15 15:17:57.744815] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:19.705 [2024-07-15 15:17:57.744842] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:19.705 [2024-07-15 15:17:57.744868] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:19.705 [2024-07-15 15:17:57.744897] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:19.705 [2024-07-15 15:17:57.744922] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:19.705 [2024-07-15 15:17:57.744947] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:19.705 [2024-07-15 15:17:57.744956] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:19.705 [2024-07-15 15:17:57.744963] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:19.705 [2024-07-15 15:17:57.744971] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:19.705 [2024-07-15 15:17:57.744978] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:19.705 [2024-07-15 15:17:57.744984] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:19.705 [2024-07-15 15:17:57.744998] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:19.705 [2024-07-15 15:17:57.745006] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:19.705 [2024-07-15 15:17:57.745013] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:19.705 [2024-07-15 15:17:57.745019] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:19.705 [2024-07-15 15:17:57.745026] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:19.705 [2024-07-15 15:17:57.745032] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:19.705 [2024-07-15 15:17:57.745040] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:19.705 [2024-07-15 15:17:57.745047] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:19.705 [2024-07-15 15:17:57.745054] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:19.705 [2024-07-15 15:17:57.745060] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:19.705 [2024-07-15 15:17:57.745066] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:19.705 [2024-07-15 15:17:57.745073] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:19.705 [2024-07-15 15:17:57.745080] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:19.705 [2024-07-15 15:17:57.745086] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:19.705 [2024-07-15 15:17:57.745094] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:19.705 [2024-07-15 15:17:57.745101] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:19.705 [2024-07-15 15:17:57.745108] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:19.705 [2024-07-15 15:17:57.745115] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:19.705 [2024-07-15 15:17:57.745122] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:19.705 [2024-07-15 15:17:57.745129] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:19.705 [2024-07-15 15:17:57.745135] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:19.705 [2024-07-15 15:17:57.745141] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:19.705 [2024-07-15 15:17:57.745148] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:19.705 [2024-07-15 15:17:57.745157] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:19.705 [2024-07-15 15:17:57.745169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:19.705 [2024-07-15 15:17:57.745178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:19.705 [2024-07-15 15:17:57.745186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:19.705 [2024-07-15 15:17:57.745193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:19.705 [2024-07-15 15:17:57.745200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:19.705 [2024-07-15 15:17:57.745208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:19.705 [2024-07-15 15:17:57.745215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:19.705 [2024-07-15 15:17:57.745223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:19.706 [2024-07-15 15:17:57.745231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:19.706 [2024-07-15 15:17:57.745237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:19.706 [2024-07-15 15:17:57.745244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:19.706 [2024-07-15 15:17:57.745251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:19.706 [2024-07-15 15:17:57.745258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:19.706 [2024-07-15 15:17:57.745265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:19.706 [2024-07-15 15:17:57.745272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:19.706 [2024-07-15 15:17:57.745280] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:19.706 [2024-07-15 15:17:57.745287] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:19.706 [2024-07-15 15:17:57.745295] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:19.706 [2024-07-15 15:17:57.745302] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:19.706 [2024-07-15 15:17:57.745310] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:19.706 [2024-07-15 15:17:57.745317] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:19.706 [2024-07-15 15:17:57.745325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.706 [2024-07-15 15:17:57.745333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:19.706 [2024-07-15 15:17:57.745342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.097 ms 00:22:19.706 [2024-07-15 15:17:57.745349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.706 [2024-07-15 15:17:57.806663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.706 [2024-07-15 15:17:57.806719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:19.706 [2024-07-15 15:17:57.806736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.370 ms 00:22:19.706 [2024-07-15 15:17:57.806745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.706 [2024-07-15 15:17:57.806927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.706 [2024-07-15 15:17:57.806940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:19.706 [2024-07-15 15:17:57.806950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:19.706 [2024-07-15 15:17:57.806962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.965 [2024-07-15 15:17:57.864689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.965 [2024-07-15 15:17:57.864737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:19.965 [2024-07-15 15:17:57.864750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.812 ms 00:22:19.965 [2024-07-15 15:17:57.864759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.965 [2024-07-15 15:17:57.864871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.965 [2024-07-15 15:17:57.864883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:19.965 [2024-07-15 15:17:57.864893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:19.965 [2024-07-15 15:17:57.864901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.965 [2024-07-15 15:17:57.865356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.965 [2024-07-15 15:17:57.865373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:19.965 [2024-07-15 15:17:57.865383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.434 ms 00:22:19.965 [2024-07-15 15:17:57.865390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.965 [2024-07-15 15:17:57.865535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:22:19.965 [2024-07-15 15:17:57.865553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:19.965 [2024-07-15 15:17:57.865563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:22:19.965 [2024-07-15 15:17:57.865571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.965 [2024-07-15 15:17:57.888568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.965 [2024-07-15 15:17:57.888610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:19.965 [2024-07-15 15:17:57.888622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.013 ms 00:22:19.965 [2024-07-15 15:17:57.888629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.965 [2024-07-15 15:17:57.910119] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:19.965 [2024-07-15 15:17:57.910153] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:19.965 [2024-07-15 15:17:57.910166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.965 [2024-07-15 15:17:57.910174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:19.965 [2024-07-15 15:17:57.910184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.438 ms 00:22:19.965 [2024-07-15 15:17:57.910190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.965 [2024-07-15 15:17:57.941812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.965 [2024-07-15 15:17:57.941850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:19.965 [2024-07-15 15:17:57.941862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.608 ms 00:22:19.965 [2024-07-15 15:17:57.941869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.965 [2024-07-15 15:17:57.961433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.965 [2024-07-15 15:17:57.961466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:19.965 [2024-07-15 15:17:57.961477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.523 ms 00:22:19.965 [2024-07-15 15:17:57.961484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.965 [2024-07-15 15:17:57.980815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.965 [2024-07-15 15:17:57.980845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:19.965 [2024-07-15 15:17:57.980855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.286 ms 00:22:19.965 [2024-07-15 15:17:57.980861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.965 [2024-07-15 15:17:57.981757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.965 [2024-07-15 15:17:57.981787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:19.965 [2024-07-15 15:17:57.981798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.791 ms 00:22:19.965 [2024-07-15 15:17:57.981805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.223 [2024-07-15 15:17:58.076557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.223 [2024-07-15 
15:17:58.076613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:20.223 [2024-07-15 15:17:58.076627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.905 ms 00:22:20.223 [2024-07-15 15:17:58.076635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.223 [2024-07-15 15:17:58.090264] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:20.223 [2024-07-15 15:17:58.107418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.223 [2024-07-15 15:17:58.107483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:20.223 [2024-07-15 15:17:58.107498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.700 ms 00:22:20.223 [2024-07-15 15:17:58.107508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.223 [2024-07-15 15:17:58.107636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.223 [2024-07-15 15:17:58.107650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:20.223 [2024-07-15 15:17:58.107663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:20.223 [2024-07-15 15:17:58.107672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.223 [2024-07-15 15:17:58.107740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.223 [2024-07-15 15:17:58.107749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:20.223 [2024-07-15 15:17:58.107757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:20.223 [2024-07-15 15:17:58.107765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.223 [2024-07-15 15:17:58.107785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.223 [2024-07-15 15:17:58.107793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:20.223 [2024-07-15 15:17:58.107801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:20.224 [2024-07-15 15:17:58.107811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.224 [2024-07-15 15:17:58.107846] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:20.224 [2024-07-15 15:17:58.107854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.224 [2024-07-15 15:17:58.107862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:20.224 [2024-07-15 15:17:58.107870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:20.224 [2024-07-15 15:17:58.107877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.224 [2024-07-15 15:17:58.148540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.224 [2024-07-15 15:17:58.148594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:20.224 [2024-07-15 15:17:58.148614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.720 ms 00:22:20.224 [2024-07-15 15:17:58.148623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.224 [2024-07-15 15:17:58.148744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.224 [2024-07-15 15:17:58.148755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:20.224 [2024-07-15 
15:17:58.148763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:22:20.224 [2024-07-15 15:17:58.148770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.224 [2024-07-15 15:17:58.149763] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:20.224 [2024-07-15 15:17:58.155224] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 446.417 ms, result 0 00:22:20.224 [2024-07-15 15:17:58.156025] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:20.224 [2024-07-15 15:17:58.175772] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:28.413  Copying: 36/256 [MB] (36 MBps) Copying: 68/256 [MB] (31 MBps) Copying: 100/256 [MB] (31 MBps) Copying: 131/256 [MB] (31 MBps) Copying: 163/256 [MB] (32 MBps) Copying: 196/256 [MB] (33 MBps) Copying: 229/256 [MB] (32 MBps) Copying: 256/256 [MB] (average 32 MBps)[2024-07-15 15:18:06.473313] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:28.413 [2024-07-15 15:18:06.497978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.413 [2024-07-15 15:18:06.498056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:28.413 [2024-07-15 15:18:06.498073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:28.413 [2024-07-15 15:18:06.498082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.413 [2024-07-15 15:18:06.498114] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:28.413 [2024-07-15 15:18:06.502136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.413 [2024-07-15 15:18:06.502172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:28.413 [2024-07-15 15:18:06.502183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.014 ms 00:22:28.413 [2024-07-15 15:18:06.502190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.413 [2024-07-15 15:18:06.502480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.413 [2024-07-15 15:18:06.502491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:28.413 [2024-07-15 15:18:06.502501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:22:28.413 [2024-07-15 15:18:06.502509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.413 [2024-07-15 15:18:06.506449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.413 [2024-07-15 15:18:06.506493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:28.413 [2024-07-15 15:18:06.506505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.929 ms 00:22:28.413 [2024-07-15 15:18:06.506520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.413 [2024-07-15 15:18:06.512988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.413 [2024-07-15 15:18:06.513023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:28.414 [2024-07-15 15:18:06.513049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.451 ms 00:22:28.414 [2024-07-15 
15:18:06.513058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.675 [2024-07-15 15:18:06.560734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.675 [2024-07-15 15:18:06.560796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:28.675 [2024-07-15 15:18:06.560811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.679 ms 00:22:28.675 [2024-07-15 15:18:06.560818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.675 [2024-07-15 15:18:06.584602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.675 [2024-07-15 15:18:06.584663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:28.675 [2024-07-15 15:18:06.584676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.730 ms 00:22:28.675 [2024-07-15 15:18:06.584684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.675 [2024-07-15 15:18:06.584859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.675 [2024-07-15 15:18:06.584871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:28.675 [2024-07-15 15:18:06.584880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:22:28.675 [2024-07-15 15:18:06.584887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.675 [2024-07-15 15:18:06.624201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.675 [2024-07-15 15:18:06.624246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:28.675 [2024-07-15 15:18:06.624258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.372 ms 00:22:28.675 [2024-07-15 15:18:06.624265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.675 [2024-07-15 15:18:06.661983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.675 [2024-07-15 15:18:06.662030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:28.675 [2024-07-15 15:18:06.662041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.734 ms 00:22:28.675 [2024-07-15 15:18:06.662048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.675 [2024-07-15 15:18:06.700408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.675 [2024-07-15 15:18:06.700453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:28.675 [2024-07-15 15:18:06.700466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.384 ms 00:22:28.675 [2024-07-15 15:18:06.700473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.675 [2024-07-15 15:18:06.739374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.675 [2024-07-15 15:18:06.739412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:28.675 [2024-07-15 15:18:06.739423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.891 ms 00:22:28.675 [2024-07-15 15:18:06.739430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.675 [2024-07-15 15:18:06.739480] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:28.675 [2024-07-15 15:18:06.739495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 
15:18:06.739514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 
[2024-07-15 15:18:06.739717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:28.675 [2024-07-15 15:18:06.739879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.739886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.739893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 
state: free 00:22:28.676 [2024-07-15 15:18:06.739900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.739907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.739914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.739922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.739929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.739938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.739945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.739952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.739959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.739967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.739975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.739982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.739990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.739997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.740016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.740024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.740031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.740038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.740046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.740053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.740060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.740068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.740076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.740083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.740090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 
0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.740098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.740105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.740111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.740119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.740126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.740133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.740140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.740147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.740156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.740163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.740170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.740178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.740186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.740193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.740200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.740208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.740215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.740223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.740231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.740240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.740247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.740255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.740262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.740269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:28.676 [2024-07-15 15:18:06.740283] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] 00:22:28.676 [2024-07-15 15:18:06.740291] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3a743e15-4abe-4742-97e2-4f048b457e12 00:22:28.676 [2024-07-15 15:18:06.740298] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:28.676 [2024-07-15 15:18:06.740305] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:28.676 [2024-07-15 15:18:06.740325] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:28.676 [2024-07-15 15:18:06.740332] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:28.676 [2024-07-15 15:18:06.740339] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:28.676 [2024-07-15 15:18:06.740347] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:28.676 [2024-07-15 15:18:06.740354] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:28.676 [2024-07-15 15:18:06.740360] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:28.676 [2024-07-15 15:18:06.740366] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:28.676 [2024-07-15 15:18:06.740374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.676 [2024-07-15 15:18:06.740381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:28.676 [2024-07-15 15:18:06.740388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.897 ms 00:22:28.676 [2024-07-15 15:18:06.740398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.676 [2024-07-15 15:18:06.760887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.676 [2024-07-15 15:18:06.760923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:28.676 [2024-07-15 15:18:06.760932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.507 ms 00:22:28.676 [2024-07-15 15:18:06.760939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.676 [2024-07-15 15:18:06.761467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.676 [2024-07-15 15:18:06.761486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:28.676 [2024-07-15 15:18:06.761500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.476 ms 00:22:28.676 [2024-07-15 15:18:06.761507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.935 [2024-07-15 15:18:06.810023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.935 [2024-07-15 15:18:06.810090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:28.935 [2024-07-15 15:18:06.810103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.935 [2024-07-15 15:18:06.810111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.935 [2024-07-15 15:18:06.810197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.935 [2024-07-15 15:18:06.810205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:28.935 [2024-07-15 15:18:06.810219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.935 [2024-07-15 15:18:06.810226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.935 [2024-07-15 15:18:06.810275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.935 
[2024-07-15 15:18:06.810285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:28.935 [2024-07-15 15:18:06.810292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.935 [2024-07-15 15:18:06.810298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.935 [2024-07-15 15:18:06.810316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.935 [2024-07-15 15:18:06.810325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:28.935 [2024-07-15 15:18:06.810332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.935 [2024-07-15 15:18:06.810341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.935 [2024-07-15 15:18:06.930390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.936 [2024-07-15 15:18:06.930449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:28.936 [2024-07-15 15:18:06.930461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.936 [2024-07-15 15:18:06.930491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.936 [2024-07-15 15:18:07.035170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.936 [2024-07-15 15:18:07.035231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:28.936 [2024-07-15 15:18:07.035243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.936 [2024-07-15 15:18:07.035256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.936 [2024-07-15 15:18:07.035325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.936 [2024-07-15 15:18:07.035335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:28.936 [2024-07-15 15:18:07.035343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.936 [2024-07-15 15:18:07.035352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.936 [2024-07-15 15:18:07.035379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.936 [2024-07-15 15:18:07.035388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:28.936 [2024-07-15 15:18:07.035396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.936 [2024-07-15 15:18:07.035403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.936 [2024-07-15 15:18:07.035519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.936 [2024-07-15 15:18:07.035530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:28.936 [2024-07-15 15:18:07.035538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.936 [2024-07-15 15:18:07.035545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.936 [2024-07-15 15:18:07.035582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.936 [2024-07-15 15:18:07.035592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:28.936 [2024-07-15 15:18:07.035599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.936 [2024-07-15 15:18:07.035606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.936 [2024-07-15 15:18:07.035662] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.936 [2024-07-15 15:18:07.035670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:28.936 [2024-07-15 15:18:07.035678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.936 [2024-07-15 15:18:07.035684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.936 [2024-07-15 15:18:07.035724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.936 [2024-07-15 15:18:07.035733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:28.936 [2024-07-15 15:18:07.035740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.936 [2024-07-15 15:18:07.035748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.936 [2024-07-15 15:18:07.035880] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 538.949 ms, result 0 00:22:30.319 00:22:30.319 00:22:30.319 15:18:08 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:30.888 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:22:30.888 15:18:08 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:22:30.888 15:18:08 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:22:30.888 15:18:08 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:30.888 15:18:08 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:30.888 15:18:08 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:22:30.888 15:18:08 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:22:30.888 15:18:08 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 82100 00:22:30.888 15:18:08 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 82100 ']' 00:22:30.888 15:18:08 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 82100 00:22:30.888 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (82100) - No such process 00:22:30.888 Process with pid 82100 is not found 00:22:30.888 15:18:08 ftl.ftl_trim -- common/autotest_common.sh@975 -- # echo 'Process with pid 82100 is not found' 00:22:30.888 00:22:30.888 real 1m8.144s 00:22:30.888 user 1m38.410s 00:22:30.888 sys 0m5.813s 00:22:30.888 ************************************ 00:22:30.888 END TEST ftl_trim 00:22:30.888 ************************************ 00:22:30.888 15:18:08 ftl.ftl_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:30.888 15:18:08 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:22:30.888 15:18:08 ftl -- common/autotest_common.sh@1142 -- # return 0 00:22:30.888 15:18:08 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:22:30.888 15:18:08 ftl -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:22:30.888 15:18:08 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:30.888 15:18:08 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:30.888 ************************************ 00:22:30.888 START TEST ftl_restore 00:22:30.888 ************************************ 00:22:30.888 15:18:08 ftl.ftl_restore -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:22:31.147 * Looking for test 
storage... 00:22:31.147 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:31.147 15:18:09 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:31.147 15:18:09 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:22:31.147 15:18:09 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:31.147 15:18:09 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:31.147 15:18:09 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:31.147 15:18:09 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:31.147 15:18:09 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:31.147 15:18:09 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:31.147 15:18:09 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:31.147 15:18:09 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:31.147 15:18:09 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:31.147 15:18:09 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:31.147 15:18:09 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:31.147 15:18:09 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:31.147 15:18:09 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:31.147 15:18:09 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:31.147 15:18:09 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:31.147 15:18:09 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:31.147 15:18:09 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:31.147 15:18:09 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:31.147 15:18:09 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:31.147 15:18:09 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:31.147 15:18:09 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:31.147 15:18:09 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:31.147 15:18:09 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:31.147 15:18:09 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:31.148 15:18:09 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:31.148 15:18:09 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:31.148 15:18:09 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:31.148 15:18:09 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:31.148 15:18:09 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:22:31.148 15:18:09 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.9oswOiYNAp 00:22:31.148 15:18:09 
ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:22:31.148 15:18:09 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:22:31.148 15:18:09 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:22:31.148 15:18:09 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:22:31.148 15:18:09 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:22:31.148 15:18:09 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:22:31.148 15:18:09 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:22:31.148 15:18:09 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:31.148 15:18:09 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=82353 00:22:31.148 15:18:09 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:31.148 15:18:09 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 82353 00:22:31.148 15:18:09 ftl.ftl_restore -- common/autotest_common.sh@829 -- # '[' -z 82353 ']' 00:22:31.148 15:18:09 ftl.ftl_restore -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:31.148 15:18:09 ftl.ftl_restore -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:31.148 15:18:09 ftl.ftl_restore -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:31.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:31.148 15:18:09 ftl.ftl_restore -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:31.148 15:18:09 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:22:31.148 [2024-07-15 15:18:09.162646] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
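[annotation] The restore flow traced above is driven by test/ftl/restore.sh, which the trim teardown handed off to with "-c 0000:00:10.0 0000:00:11.0". The getopts trace shows "-c" selecting the PCIe address of the NV-cache device, the remaining positional argument becoming the base device, and timeout=240 being applied before spdk_tgt is launched and waitforlisten blocks on pid 82353. A minimal sketch of an equivalent manual invocation, assuming an SPDK checkout at /home/vagrant/spdk_repo/spdk as in this run:

    # Hypothetical manual run mirroring the arguments recorded in this log:
    # -c = PCIe address of the NV-cache NVMe device,
    # trailing positional argument = PCIe address of the base NVMe device.
    cd /home/vagrant/spdk_repo/spdk
    ./test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0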
00:22:31.148 [2024-07-15 15:18:09.162861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82353 ] 00:22:31.406 [2024-07-15 15:18:09.326427] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.664 [2024-07-15 15:18:09.555387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.603 15:18:10 ftl.ftl_restore -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:32.603 15:18:10 ftl.ftl_restore -- common/autotest_common.sh@862 -- # return 0 00:22:32.603 15:18:10 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:32.603 15:18:10 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:22:32.603 15:18:10 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:32.603 15:18:10 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:22:32.603 15:18:10 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:22:32.603 15:18:10 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:32.863 15:18:10 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:32.863 15:18:10 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:22:32.863 15:18:10 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:32.863 15:18:10 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:22:32.863 15:18:10 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:32.863 15:18:10 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:22:32.863 15:18:10 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:22:32.863 15:18:10 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:32.863 15:18:10 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:32.863 { 00:22:32.863 "name": "nvme0n1", 00:22:32.863 "aliases": [ 00:22:32.863 "e6c5afb5-54f1-430b-96e3-6c4a7b1e7ebc" 00:22:32.863 ], 00:22:32.863 "product_name": "NVMe disk", 00:22:32.863 "block_size": 4096, 00:22:32.863 "num_blocks": 1310720, 00:22:32.863 "uuid": "e6c5afb5-54f1-430b-96e3-6c4a7b1e7ebc", 00:22:32.863 "assigned_rate_limits": { 00:22:32.863 "rw_ios_per_sec": 0, 00:22:32.863 "rw_mbytes_per_sec": 0, 00:22:32.863 "r_mbytes_per_sec": 0, 00:22:32.863 "w_mbytes_per_sec": 0 00:22:32.863 }, 00:22:32.863 "claimed": true, 00:22:32.863 "claim_type": "read_many_write_one", 00:22:32.863 "zoned": false, 00:22:32.863 "supported_io_types": { 00:22:32.863 "read": true, 00:22:32.863 "write": true, 00:22:32.863 "unmap": true, 00:22:32.863 "flush": true, 00:22:32.863 "reset": true, 00:22:32.863 "nvme_admin": true, 00:22:32.863 "nvme_io": true, 00:22:32.863 "nvme_io_md": false, 00:22:32.863 "write_zeroes": true, 00:22:32.863 "zcopy": false, 00:22:32.863 "get_zone_info": false, 00:22:32.863 "zone_management": false, 00:22:32.863 "zone_append": false, 00:22:32.863 "compare": true, 00:22:32.863 "compare_and_write": false, 00:22:32.863 "abort": true, 00:22:32.863 "seek_hole": false, 00:22:32.863 "seek_data": false, 00:22:32.863 "copy": true, 00:22:32.863 "nvme_iov_md": false 00:22:32.863 }, 00:22:32.863 "driver_specific": { 00:22:32.863 "nvme": [ 00:22:32.863 { 00:22:32.863 
"pci_address": "0000:00:11.0", 00:22:32.863 "trid": { 00:22:32.863 "trtype": "PCIe", 00:22:32.863 "traddr": "0000:00:11.0" 00:22:32.863 }, 00:22:32.863 "ctrlr_data": { 00:22:32.863 "cntlid": 0, 00:22:32.863 "vendor_id": "0x1b36", 00:22:32.863 "model_number": "QEMU NVMe Ctrl", 00:22:32.863 "serial_number": "12341", 00:22:32.863 "firmware_revision": "8.0.0", 00:22:32.863 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:32.863 "oacs": { 00:22:32.863 "security": 0, 00:22:32.863 "format": 1, 00:22:32.863 "firmware": 0, 00:22:32.863 "ns_manage": 1 00:22:32.863 }, 00:22:32.863 "multi_ctrlr": false, 00:22:32.863 "ana_reporting": false 00:22:32.863 }, 00:22:32.863 "vs": { 00:22:32.863 "nvme_version": "1.4" 00:22:32.863 }, 00:22:32.863 "ns_data": { 00:22:32.863 "id": 1, 00:22:32.863 "can_share": false 00:22:32.863 } 00:22:32.863 } 00:22:32.863 ], 00:22:32.863 "mp_policy": "active_passive" 00:22:32.863 } 00:22:32.863 } 00:22:32.863 ]' 00:22:32.863 15:18:10 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:32.863 15:18:10 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:22:32.863 15:18:10 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:33.122 15:18:10 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:22:33.123 15:18:10 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:22:33.123 15:18:10 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:22:33.123 15:18:10 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:22:33.123 15:18:10 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:33.123 15:18:10 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:22:33.123 15:18:10 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:33.123 15:18:10 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:33.123 15:18:11 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=cc85bdcc-f6d7-4334-9ddd-eb9680baf5fd 00:22:33.123 15:18:11 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:22:33.123 15:18:11 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cc85bdcc-f6d7-4334-9ddd-eb9680baf5fd 00:22:33.381 15:18:11 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:33.641 15:18:11 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=32828d38-88ce-42d5-ae8c-60f363f727c8 00:22:33.641 15:18:11 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 32828d38-88ce-42d5-ae8c-60f363f727c8 00:22:33.900 15:18:11 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=dd4001fa-e08c-442c-a27d-fcdc832a18c6 00:22:33.900 15:18:11 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:22:33.900 15:18:11 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 dd4001fa-e08c-442c-a27d-fcdc832a18c6 00:22:33.900 15:18:11 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:22:33.900 15:18:11 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:33.900 15:18:11 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=dd4001fa-e08c-442c-a27d-fcdc832a18c6 00:22:33.900 15:18:11 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:22:33.900 15:18:11 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size dd4001fa-e08c-442c-a27d-fcdc832a18c6 
00:22:33.900 15:18:11 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=dd4001fa-e08c-442c-a27d-fcdc832a18c6 00:22:33.900 15:18:11 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:33.900 15:18:11 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:22:33.900 15:18:11 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:22:33.900 15:18:11 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dd4001fa-e08c-442c-a27d-fcdc832a18c6 00:22:34.159 15:18:12 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:34.159 { 00:22:34.159 "name": "dd4001fa-e08c-442c-a27d-fcdc832a18c6", 00:22:34.159 "aliases": [ 00:22:34.159 "lvs/nvme0n1p0" 00:22:34.159 ], 00:22:34.159 "product_name": "Logical Volume", 00:22:34.159 "block_size": 4096, 00:22:34.159 "num_blocks": 26476544, 00:22:34.159 "uuid": "dd4001fa-e08c-442c-a27d-fcdc832a18c6", 00:22:34.159 "assigned_rate_limits": { 00:22:34.159 "rw_ios_per_sec": 0, 00:22:34.159 "rw_mbytes_per_sec": 0, 00:22:34.159 "r_mbytes_per_sec": 0, 00:22:34.159 "w_mbytes_per_sec": 0 00:22:34.159 }, 00:22:34.159 "claimed": false, 00:22:34.159 "zoned": false, 00:22:34.159 "supported_io_types": { 00:22:34.159 "read": true, 00:22:34.159 "write": true, 00:22:34.159 "unmap": true, 00:22:34.159 "flush": false, 00:22:34.159 "reset": true, 00:22:34.159 "nvme_admin": false, 00:22:34.159 "nvme_io": false, 00:22:34.159 "nvme_io_md": false, 00:22:34.159 "write_zeroes": true, 00:22:34.159 "zcopy": false, 00:22:34.159 "get_zone_info": false, 00:22:34.159 "zone_management": false, 00:22:34.159 "zone_append": false, 00:22:34.159 "compare": false, 00:22:34.159 "compare_and_write": false, 00:22:34.159 "abort": false, 00:22:34.159 "seek_hole": true, 00:22:34.159 "seek_data": true, 00:22:34.159 "copy": false, 00:22:34.159 "nvme_iov_md": false 00:22:34.159 }, 00:22:34.159 "driver_specific": { 00:22:34.159 "lvol": { 00:22:34.159 "lvol_store_uuid": "32828d38-88ce-42d5-ae8c-60f363f727c8", 00:22:34.159 "base_bdev": "nvme0n1", 00:22:34.159 "thin_provision": true, 00:22:34.159 "num_allocated_clusters": 0, 00:22:34.159 "snapshot": false, 00:22:34.159 "clone": false, 00:22:34.159 "esnap_clone": false 00:22:34.159 } 00:22:34.159 } 00:22:34.159 } 00:22:34.159 ]' 00:22:34.159 15:18:12 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:34.159 15:18:12 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:22:34.159 15:18:12 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:34.159 15:18:12 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:34.159 15:18:12 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:34.159 15:18:12 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:22:34.159 15:18:12 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:22:34.159 15:18:12 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:22:34.159 15:18:12 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:34.419 15:18:12 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:34.419 15:18:12 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:34.419 15:18:12 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size dd4001fa-e08c-442c-a27d-fcdc832a18c6 00:22:34.419 15:18:12 ftl.ftl_restore -- 
common/autotest_common.sh@1378 -- # local bdev_name=dd4001fa-e08c-442c-a27d-fcdc832a18c6 00:22:34.419 15:18:12 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:34.419 15:18:12 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:22:34.419 15:18:12 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:22:34.419 15:18:12 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dd4001fa-e08c-442c-a27d-fcdc832a18c6 00:22:34.678 15:18:12 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:34.678 { 00:22:34.678 "name": "dd4001fa-e08c-442c-a27d-fcdc832a18c6", 00:22:34.678 "aliases": [ 00:22:34.678 "lvs/nvme0n1p0" 00:22:34.678 ], 00:22:34.678 "product_name": "Logical Volume", 00:22:34.678 "block_size": 4096, 00:22:34.678 "num_blocks": 26476544, 00:22:34.678 "uuid": "dd4001fa-e08c-442c-a27d-fcdc832a18c6", 00:22:34.678 "assigned_rate_limits": { 00:22:34.678 "rw_ios_per_sec": 0, 00:22:34.678 "rw_mbytes_per_sec": 0, 00:22:34.678 "r_mbytes_per_sec": 0, 00:22:34.678 "w_mbytes_per_sec": 0 00:22:34.678 }, 00:22:34.678 "claimed": false, 00:22:34.678 "zoned": false, 00:22:34.678 "supported_io_types": { 00:22:34.678 "read": true, 00:22:34.678 "write": true, 00:22:34.678 "unmap": true, 00:22:34.678 "flush": false, 00:22:34.678 "reset": true, 00:22:34.678 "nvme_admin": false, 00:22:34.678 "nvme_io": false, 00:22:34.678 "nvme_io_md": false, 00:22:34.678 "write_zeroes": true, 00:22:34.678 "zcopy": false, 00:22:34.678 "get_zone_info": false, 00:22:34.678 "zone_management": false, 00:22:34.678 "zone_append": false, 00:22:34.678 "compare": false, 00:22:34.678 "compare_and_write": false, 00:22:34.678 "abort": false, 00:22:34.678 "seek_hole": true, 00:22:34.678 "seek_data": true, 00:22:34.678 "copy": false, 00:22:34.678 "nvme_iov_md": false 00:22:34.678 }, 00:22:34.678 "driver_specific": { 00:22:34.678 "lvol": { 00:22:34.678 "lvol_store_uuid": "32828d38-88ce-42d5-ae8c-60f363f727c8", 00:22:34.678 "base_bdev": "nvme0n1", 00:22:34.678 "thin_provision": true, 00:22:34.678 "num_allocated_clusters": 0, 00:22:34.678 "snapshot": false, 00:22:34.678 "clone": false, 00:22:34.678 "esnap_clone": false 00:22:34.678 } 00:22:34.678 } 00:22:34.678 } 00:22:34.678 ]' 00:22:34.678 15:18:12 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:34.678 15:18:12 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:22:34.678 15:18:12 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:34.678 15:18:12 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:34.678 15:18:12 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:34.678 15:18:12 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:22:34.678 15:18:12 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:22:34.678 15:18:12 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:34.957 15:18:12 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:22:34.957 15:18:12 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size dd4001fa-e08c-442c-a27d-fcdc832a18c6 00:22:34.957 15:18:12 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=dd4001fa-e08c-442c-a27d-fcdc832a18c6 00:22:34.957 15:18:12 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:34.957 15:18:12 ftl.ftl_restore -- 
common/autotest_common.sh@1380 -- # local bs 00:22:34.957 15:18:12 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:22:34.957 15:18:12 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dd4001fa-e08c-442c-a27d-fcdc832a18c6 00:22:35.226 15:18:13 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:35.226 { 00:22:35.226 "name": "dd4001fa-e08c-442c-a27d-fcdc832a18c6", 00:22:35.226 "aliases": [ 00:22:35.226 "lvs/nvme0n1p0" 00:22:35.226 ], 00:22:35.226 "product_name": "Logical Volume", 00:22:35.226 "block_size": 4096, 00:22:35.226 "num_blocks": 26476544, 00:22:35.226 "uuid": "dd4001fa-e08c-442c-a27d-fcdc832a18c6", 00:22:35.226 "assigned_rate_limits": { 00:22:35.226 "rw_ios_per_sec": 0, 00:22:35.226 "rw_mbytes_per_sec": 0, 00:22:35.226 "r_mbytes_per_sec": 0, 00:22:35.226 "w_mbytes_per_sec": 0 00:22:35.226 }, 00:22:35.226 "claimed": false, 00:22:35.226 "zoned": false, 00:22:35.226 "supported_io_types": { 00:22:35.226 "read": true, 00:22:35.226 "write": true, 00:22:35.226 "unmap": true, 00:22:35.226 "flush": false, 00:22:35.226 "reset": true, 00:22:35.226 "nvme_admin": false, 00:22:35.226 "nvme_io": false, 00:22:35.226 "nvme_io_md": false, 00:22:35.226 "write_zeroes": true, 00:22:35.226 "zcopy": false, 00:22:35.226 "get_zone_info": false, 00:22:35.226 "zone_management": false, 00:22:35.226 "zone_append": false, 00:22:35.226 "compare": false, 00:22:35.226 "compare_and_write": false, 00:22:35.226 "abort": false, 00:22:35.226 "seek_hole": true, 00:22:35.226 "seek_data": true, 00:22:35.226 "copy": false, 00:22:35.226 "nvme_iov_md": false 00:22:35.226 }, 00:22:35.226 "driver_specific": { 00:22:35.226 "lvol": { 00:22:35.226 "lvol_store_uuid": "32828d38-88ce-42d5-ae8c-60f363f727c8", 00:22:35.226 "base_bdev": "nvme0n1", 00:22:35.226 "thin_provision": true, 00:22:35.226 "num_allocated_clusters": 0, 00:22:35.226 "snapshot": false, 00:22:35.226 "clone": false, 00:22:35.226 "esnap_clone": false 00:22:35.226 } 00:22:35.226 } 00:22:35.226 } 00:22:35.226 ]' 00:22:35.226 15:18:13 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:35.226 15:18:13 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:22:35.226 15:18:13 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:35.226 15:18:13 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:35.226 15:18:13 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:35.226 15:18:13 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:22:35.226 15:18:13 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:22:35.226 15:18:13 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d dd4001fa-e08c-442c-a27d-fcdc832a18c6 --l2p_dram_limit 10' 00:22:35.226 15:18:13 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:22:35.226 15:18:13 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:22:35.226 15:18:13 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:35.226 15:18:13 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:22:35.226 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:22:35.226 15:18:13 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d dd4001fa-e08c-442c-a27d-fcdc832a18c6 --l2p_dram_limit 10 -c nvc0n1p0 00:22:35.486 
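[annotation] Taken together, the RPC calls traced since the target started form the whole FTL construction sequence for this test: attach the base NVMe controller, carve a 103424 MiB thin lvol out of it, attach the cache NVMe controller, split off a 5171 MiB cache partition sized from the lvol, then create ftl0 on top. A condensed sketch of that sequence, using the commands exactly as they appear in the trace and leaving the run-specific UUIDs as placeholders:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Base device (0000:00:11.0) -> lvstore -> 103424 MiB thin-provisioned lvol
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u <lvstore-uuid>
    # Cache device (0000:00:10.0) -> 5171 MiB split used as the write-buffer cache
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $rpc bdev_split_create nvc0n1 -s 5171 1
    # FTL bdev over the lvol, 10 MiB L2P DRAM limit, nvc0n1p0 as NV cache
    $rpc -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> --l2p_dram_limit 10 -c nvc0n1p0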
[2024-07-15 15:18:13.394302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.486 [2024-07-15 15:18:13.394353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:35.486 [2024-07-15 15:18:13.394368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:35.486 [2024-07-15 15:18:13.394378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.486 [2024-07-15 15:18:13.394444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.486 [2024-07-15 15:18:13.394455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:35.486 [2024-07-15 15:18:13.394463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:22:35.486 [2024-07-15 15:18:13.394479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.486 [2024-07-15 15:18:13.394499] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:35.486 [2024-07-15 15:18:13.395652] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:35.486 [2024-07-15 15:18:13.395678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.486 [2024-07-15 15:18:13.395691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:35.486 [2024-07-15 15:18:13.395699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.187 ms 00:22:35.486 [2024-07-15 15:18:13.395708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.486 [2024-07-15 15:18:13.395775] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 9bb54792-33fc-4be3-bb67-de335ccaec96 00:22:35.486 [2024-07-15 15:18:13.397147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.486 [2024-07-15 15:18:13.397169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:35.486 [2024-07-15 15:18:13.397180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:22:35.486 [2024-07-15 15:18:13.397187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.486 [2024-07-15 15:18:13.404471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.486 [2024-07-15 15:18:13.404500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:35.487 [2024-07-15 15:18:13.404514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.256 ms 00:22:35.487 [2024-07-15 15:18:13.404521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.487 [2024-07-15 15:18:13.404614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.487 [2024-07-15 15:18:13.404629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:35.487 [2024-07-15 15:18:13.404638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:22:35.487 [2024-07-15 15:18:13.404646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.487 [2024-07-15 15:18:13.404724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.487 [2024-07-15 15:18:13.404741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:35.487 [2024-07-15 15:18:13.404751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:22:35.487 [2024-07-15 15:18:13.404760] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.487 [2024-07-15 15:18:13.404787] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:35.487 [2024-07-15 15:18:13.410708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.487 [2024-07-15 15:18:13.410742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:35.487 [2024-07-15 15:18:13.410752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.942 ms 00:22:35.487 [2024-07-15 15:18:13.410762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.487 [2024-07-15 15:18:13.410798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.487 [2024-07-15 15:18:13.410809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:35.487 [2024-07-15 15:18:13.410817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:35.487 [2024-07-15 15:18:13.410826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.487 [2024-07-15 15:18:13.410859] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:35.487 [2024-07-15 15:18:13.411022] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:35.487 [2024-07-15 15:18:13.411038] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:35.487 [2024-07-15 15:18:13.411052] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:35.487 [2024-07-15 15:18:13.411062] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:35.487 [2024-07-15 15:18:13.411072] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:35.487 [2024-07-15 15:18:13.411080] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:35.487 [2024-07-15 15:18:13.411090] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:35.487 [2024-07-15 15:18:13.411099] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:35.487 [2024-07-15 15:18:13.411109] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:35.487 [2024-07-15 15:18:13.411118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.487 [2024-07-15 15:18:13.411136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:35.487 [2024-07-15 15:18:13.411144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:22:35.487 [2024-07-15 15:18:13.411153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.487 [2024-07-15 15:18:13.411224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.487 [2024-07-15 15:18:13.411234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:35.487 [2024-07-15 15:18:13.411241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:22:35.487 [2024-07-15 15:18:13.411250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.487 [2024-07-15 15:18:13.411335] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:35.487 [2024-07-15 15:18:13.411353] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region sb 00:22:35.487 [2024-07-15 15:18:13.411369] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:35.487 [2024-07-15 15:18:13.411380] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:35.487 [2024-07-15 15:18:13.411387] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:35.487 [2024-07-15 15:18:13.411396] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:35.487 [2024-07-15 15:18:13.411403] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:35.487 [2024-07-15 15:18:13.411412] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:35.487 [2024-07-15 15:18:13.411418] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:35.487 [2024-07-15 15:18:13.411427] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:35.487 [2024-07-15 15:18:13.411434] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:35.487 [2024-07-15 15:18:13.411444] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:35.487 [2024-07-15 15:18:13.411450] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:35.487 [2024-07-15 15:18:13.411458] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:35.487 [2024-07-15 15:18:13.411465] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:35.487 [2024-07-15 15:18:13.411473] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:35.487 [2024-07-15 15:18:13.411480] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:35.487 [2024-07-15 15:18:13.411490] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:35.487 [2024-07-15 15:18:13.411497] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:35.487 [2024-07-15 15:18:13.411506] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:35.487 [2024-07-15 15:18:13.411512] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:35.487 [2024-07-15 15:18:13.411520] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:35.487 [2024-07-15 15:18:13.411527] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:35.487 [2024-07-15 15:18:13.411535] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:35.487 [2024-07-15 15:18:13.411541] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:35.487 [2024-07-15 15:18:13.411550] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:35.487 [2024-07-15 15:18:13.411556] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:35.487 [2024-07-15 15:18:13.411564] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:35.487 [2024-07-15 15:18:13.411570] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:35.487 [2024-07-15 15:18:13.411578] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:35.487 [2024-07-15 15:18:13.411584] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:35.487 [2024-07-15 15:18:13.411593] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:35.487 [2024-07-15 15:18:13.411601] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:35.487 [2024-07-15 15:18:13.411611] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:35.487 [2024-07-15 15:18:13.411617] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:35.487 [2024-07-15 15:18:13.411625] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:35.487 [2024-07-15 15:18:13.411631] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:35.487 [2024-07-15 15:18:13.411640] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:35.487 [2024-07-15 15:18:13.411647] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:35.487 [2024-07-15 15:18:13.411654] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:35.487 [2024-07-15 15:18:13.411661] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:35.487 [2024-07-15 15:18:13.411669] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:35.487 [2024-07-15 15:18:13.411675] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:35.487 [2024-07-15 15:18:13.411683] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:35.487 [2024-07-15 15:18:13.411691] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:35.487 [2024-07-15 15:18:13.411700] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:35.487 [2024-07-15 15:18:13.411707] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:35.487 [2024-07-15 15:18:13.411715] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:35.487 [2024-07-15 15:18:13.411722] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:35.487 [2024-07-15 15:18:13.411732] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:35.487 [2024-07-15 15:18:13.411739] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:35.487 [2024-07-15 15:18:13.411747] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:35.487 [2024-07-15 15:18:13.411754] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:35.487 [2024-07-15 15:18:13.411766] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:35.487 [2024-07-15 15:18:13.411775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:35.487 [2024-07-15 15:18:13.411787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:35.487 [2024-07-15 15:18:13.411795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:35.487 [2024-07-15 15:18:13.411803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:35.487 [2024-07-15 15:18:13.411811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:35.487 [2024-07-15 15:18:13.411819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:35.487 [2024-07-15 15:18:13.411826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 
blk_offs:0x6120 blk_sz:0x800 00:22:35.487 [2024-07-15 15:18:13.411836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:35.487 [2024-07-15 15:18:13.411843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:35.487 [2024-07-15 15:18:13.411852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:35.487 [2024-07-15 15:18:13.411859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:35.487 [2024-07-15 15:18:13.411870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:35.487 [2024-07-15 15:18:13.411877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:35.487 [2024-07-15 15:18:13.411885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:35.487 [2024-07-15 15:18:13.411892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:35.487 [2024-07-15 15:18:13.411900] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:35.487 [2024-07-15 15:18:13.411908] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:35.488 [2024-07-15 15:18:13.411918] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:35.488 [2024-07-15 15:18:13.411925] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:35.488 [2024-07-15 15:18:13.411934] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:35.488 [2024-07-15 15:18:13.411942] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:35.488 [2024-07-15 15:18:13.411951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.488 [2024-07-15 15:18:13.411959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:35.488 [2024-07-15 15:18:13.411968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.668 ms 00:22:35.488 [2024-07-15 15:18:13.411975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.488 [2024-07-15 15:18:13.412027] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
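The MiB offsets and sizes printed by dump_region and the hex blk_offs/blk_sz values printed by ftl_superblock_v5_md_layout_dump above describe the same regions; the figures are consistent with a 4 KiB FTL block (inferred from the dump, not stated in it). A minimal shell check of that correspondence, not part of the test run and with the 4 KiB block size assumed:
  echo $(( 0x5000 * 4096 / 1048576 ))   # l2p region: 20480 blocks * 4 KiB = 80 MiB, matching "Region l2p ... blocks: 80.00 MiB"
  echo $(( 0x20 * 4096 ))               # sb region: 32 blocks * 4 KiB = 131072 B (0.12 MiB), matching "Region sb ... blocks: 0.12 MiB"
  echo $(( 0x5020 * 4096 / 1024 ))      # band_md offset: 20512 blocks * 4 KiB = 82048 KiB (80.12 MiB), matching "offset: 80.12 MiB"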
00:22:35.488 [2024-07-15 15:18:13.412037] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:38.780 [2024-07-15 15:18:16.871771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.781 [2024-07-15 15:18:16.871831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:38.781 [2024-07-15 15:18:16.871847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3466.401 ms 00:22:38.781 [2024-07-15 15:18:16.871856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.040 [2024-07-15 15:18:16.914025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.040 [2024-07-15 15:18:16.914078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:39.040 [2024-07-15 15:18:16.914093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.976 ms 00:22:39.040 [2024-07-15 15:18:16.914101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.041 [2024-07-15 15:18:16.914262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.041 [2024-07-15 15:18:16.914272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:39.041 [2024-07-15 15:18:16.914283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:22:39.041 [2024-07-15 15:18:16.914293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.041 [2024-07-15 15:18:16.961852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.041 [2024-07-15 15:18:16.961894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:39.041 [2024-07-15 15:18:16.961906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.612 ms 00:22:39.041 [2024-07-15 15:18:16.961913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.041 [2024-07-15 15:18:16.961962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.041 [2024-07-15 15:18:16.961977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:39.041 [2024-07-15 15:18:16.961986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:39.041 [2024-07-15 15:18:16.962004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.041 [2024-07-15 15:18:16.962515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.041 [2024-07-15 15:18:16.962534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:39.041 [2024-07-15 15:18:16.962546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.435 ms 00:22:39.041 [2024-07-15 15:18:16.962553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.041 [2024-07-15 15:18:16.962667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.041 [2024-07-15 15:18:16.962680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:39.041 [2024-07-15 15:18:16.962693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:22:39.041 [2024-07-15 15:18:16.962701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.041 [2024-07-15 15:18:16.983716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.041 [2024-07-15 15:18:16.983756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:39.041 [2024-07-15 
15:18:16.983782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.028 ms 00:22:39.041 [2024-07-15 15:18:16.983789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.041 [2024-07-15 15:18:16.997097] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:39.041 [2024-07-15 15:18:17.000245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.041 [2024-07-15 15:18:17.000286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:39.041 [2024-07-15 15:18:17.000297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.392 ms 00:22:39.041 [2024-07-15 15:18:17.000307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.041 [2024-07-15 15:18:17.107965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.041 [2024-07-15 15:18:17.108043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:39.041 [2024-07-15 15:18:17.108060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 107.822 ms 00:22:39.041 [2024-07-15 15:18:17.108071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.041 [2024-07-15 15:18:17.108301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.041 [2024-07-15 15:18:17.108325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:39.041 [2024-07-15 15:18:17.108335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:22:39.041 [2024-07-15 15:18:17.108348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.041 [2024-07-15 15:18:17.150778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.041 [2024-07-15 15:18:17.150837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:39.041 [2024-07-15 15:18:17.150853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.456 ms 00:22:39.041 [2024-07-15 15:18:17.150864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.301 [2024-07-15 15:18:17.191306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.301 [2024-07-15 15:18:17.191359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:39.301 [2024-07-15 15:18:17.191373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.459 ms 00:22:39.301 [2024-07-15 15:18:17.191382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.301 [2024-07-15 15:18:17.192196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.301 [2024-07-15 15:18:17.192221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:39.301 [2024-07-15 15:18:17.192230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.766 ms 00:22:39.301 [2024-07-15 15:18:17.192243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.301 [2024-07-15 15:18:17.306121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.301 [2024-07-15 15:18:17.306181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:39.301 [2024-07-15 15:18:17.306196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 114.037 ms 00:22:39.301 [2024-07-15 15:18:17.306210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.301 [2024-07-15 
15:18:17.345705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.301 [2024-07-15 15:18:17.345775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:39.301 [2024-07-15 15:18:17.345790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.521 ms 00:22:39.301 [2024-07-15 15:18:17.345800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.301 [2024-07-15 15:18:17.386079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.301 [2024-07-15 15:18:17.386135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:39.301 [2024-07-15 15:18:17.386148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.292 ms 00:22:39.301 [2024-07-15 15:18:17.386157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.562 [2024-07-15 15:18:17.424935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.562 [2024-07-15 15:18:17.424981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:39.562 [2024-07-15 15:18:17.425003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.807 ms 00:22:39.562 [2024-07-15 15:18:17.425013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.562 [2024-07-15 15:18:17.425066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.562 [2024-07-15 15:18:17.425087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:39.562 [2024-07-15 15:18:17.425095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:22:39.562 [2024-07-15 15:18:17.425108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.562 [2024-07-15 15:18:17.425194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.562 [2024-07-15 15:18:17.425206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:39.562 [2024-07-15 15:18:17.425217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:22:39.562 [2024-07-15 15:18:17.425225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.562 [2024-07-15 15:18:17.426288] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4039.259 ms, result 0 00:22:39.562 { 00:22:39.562 "name": "ftl0", 00:22:39.562 "uuid": "9bb54792-33fc-4be3-bb67-de335ccaec96" 00:22:39.562 } 00:22:39.562 15:18:17 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:22:39.562 15:18:17 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:39.562 15:18:17 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:22:39.562 15:18:17 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:39.822 [2024-07-15 15:18:17.804929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.822 [2024-07-15 15:18:17.805096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:39.822 [2024-07-15 15:18:17.805138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:39.822 [2024-07-15 15:18:17.805161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.822 [2024-07-15 15:18:17.805205] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 
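Lines 61-63 of restore.sh, traced just above, wrap the saved bdev subsystem configuration in a top-level JSON object before ftl0 is unloaded, presumably so the device can be reattached later from that file; the spdk_dd invocation further down loads a config at test/ftl/config/ftl.json. A rough sketch of what that step likely amounts to, where the redirection target and the $rpc_py/$testdir variable names are assumptions rather than taken from the script:
  {
    echo '{"subsystems": ['
    "$rpc_py" save_subsystem_config -n bdev    # dump the current bdev subsystem config, including ftl0
    echo ']}'
  } > "$testdir/config/ftl.json"               # assumed target; matches the --json= path used later
  "$rpc_py" bdev_ftl_unload -b ftl0            # tear down ftl0 so the restore path can bring it back up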
00:22:39.822 [2024-07-15 15:18:17.809272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.822 [2024-07-15 15:18:17.809348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:39.822 [2024-07-15 15:18:17.809394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.023 ms 00:22:39.822 [2024-07-15 15:18:17.809417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.822 [2024-07-15 15:18:17.809676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.822 [2024-07-15 15:18:17.809726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:39.822 [2024-07-15 15:18:17.809775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:22:39.822 [2024-07-15 15:18:17.809812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.822 [2024-07-15 15:18:17.812384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.822 [2024-07-15 15:18:17.812413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:39.822 [2024-07-15 15:18:17.812422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.536 ms 00:22:39.822 [2024-07-15 15:18:17.812432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.822 [2024-07-15 15:18:17.817368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.822 [2024-07-15 15:18:17.817404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:39.822 [2024-07-15 15:18:17.817415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.928 ms 00:22:39.822 [2024-07-15 15:18:17.817424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.822 [2024-07-15 15:18:17.855363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.822 [2024-07-15 15:18:17.855402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:39.822 [2024-07-15 15:18:17.855414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.939 ms 00:22:39.822 [2024-07-15 15:18:17.855424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.822 [2024-07-15 15:18:17.877895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.822 [2024-07-15 15:18:17.877940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:39.822 [2024-07-15 15:18:17.877953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.476 ms 00:22:39.822 [2024-07-15 15:18:17.877963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.822 [2024-07-15 15:18:17.878130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.822 [2024-07-15 15:18:17.878145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:39.822 [2024-07-15 15:18:17.878154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:22:39.822 [2024-07-15 15:18:17.878163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.822 [2024-07-15 15:18:17.915367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.822 [2024-07-15 15:18:17.915411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:39.822 [2024-07-15 15:18:17.915422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.259 ms 00:22:39.822 [2024-07-15 15:18:17.915432] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.084 [2024-07-15 15:18:17.953373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.084 [2024-07-15 15:18:17.953424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:40.084 [2024-07-15 15:18:17.953437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.971 ms 00:22:40.084 [2024-07-15 15:18:17.953447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.084 [2024-07-15 15:18:17.991626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.084 [2024-07-15 15:18:17.991671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:40.084 [2024-07-15 15:18:17.991683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.211 ms 00:22:40.084 [2024-07-15 15:18:17.991692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.084 [2024-07-15 15:18:18.029245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.084 [2024-07-15 15:18:18.029290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:40.084 [2024-07-15 15:18:18.029302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.539 ms 00:22:40.084 [2024-07-15 15:18:18.029311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.084 [2024-07-15 15:18:18.029349] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:40.084 [2024-07-15 15:18:18.029367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 
15:18:18.029498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 
00:22:40.084 [2024-07-15 15:18:18.029722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 
wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.029982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:40.084 [2024-07-15 15:18:18.030007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:40.085 [2024-07-15 15:18:18.030016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:40.085 [2024-07-15 15:18:18.030027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:40.085 [2024-07-15 15:18:18.030035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:40.085 [2024-07-15 15:18:18.030048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:40.085 [2024-07-15 15:18:18.030057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:40.085 [2024-07-15 15:18:18.030067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:40.085 [2024-07-15 15:18:18.030099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:40.085 [2024-07-15 15:18:18.030126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:40.085 [2024-07-15 15:18:18.030135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:40.085 [2024-07-15 15:18:18.030145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:40.085 [2024-07-15 15:18:18.030153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:40.085 [2024-07-15 15:18:18.030164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:40.085 [2024-07-15 15:18:18.030172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:40.085 [2024-07-15 15:18:18.030183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:40.085 [2024-07-15 15:18:18.030191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:40.085 [2024-07-15 15:18:18.030203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:40.085 [2024-07-15 15:18:18.030212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:40.085 [2024-07-15 15:18:18.030225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:40.085 [2024-07-15 15:18:18.030234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:40.085 [2024-07-15 15:18:18.030246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:40.085 [2024-07-15 15:18:18.030277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:40.085 [2024-07-15 15:18:18.030288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:40.085 [2024-07-15 15:18:18.030297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:40.085 [2024-07-15 15:18:18.030310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:40.085 [2024-07-15 15:18:18.030319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:40.085 [2024-07-15 15:18:18.030329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:40.085 [2024-07-15 15:18:18.030337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:40.085 [2024-07-15 15:18:18.030347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:40.085 [2024-07-15 15:18:18.030356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:40.085 [2024-07-15 15:18:18.030367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:40.085 [2024-07-15 15:18:18.030375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:40.085 [2024-07-15 15:18:18.030395] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:40.085 [2024-07-15 15:18:18.030407] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9bb54792-33fc-4be3-bb67-de335ccaec96 00:22:40.085 [2024-07-15 15:18:18.030419] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:40.085 [2024-07-15 15:18:18.030428] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:40.085 [2024-07-15 15:18:18.030440] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:40.085 [2024-07-15 15:18:18.030449] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:40.085 [2024-07-15 15:18:18.030459] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:40.085 [2024-07-15 15:18:18.030468] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:40.085 [2024-07-15 15:18:18.030485] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:40.085 [2024-07-15 15:18:18.030493] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:40.085 [2024-07-15 15:18:18.030501] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:40.085 [2024-07-15 15:18:18.030510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.085 [2024-07-15 15:18:18.030521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:40.085 [2024-07-15 15:18:18.030532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.165 ms 00:22:40.085 [2024-07-15 15:18:18.030541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.085 [2024-07-15 15:18:18.051028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.085 [2024-07-15 15:18:18.051077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 
00:22:40.085 [2024-07-15 15:18:18.051089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.475 ms 00:22:40.085 [2024-07-15 15:18:18.051099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.085 [2024-07-15 15:18:18.051607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.085 [2024-07-15 15:18:18.051623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:40.085 [2024-07-15 15:18:18.051632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.482 ms 00:22:40.085 [2024-07-15 15:18:18.051644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.085 [2024-07-15 15:18:18.114100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.085 [2024-07-15 15:18:18.114156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:40.085 [2024-07-15 15:18:18.114169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.085 [2024-07-15 15:18:18.114179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.085 [2024-07-15 15:18:18.114249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.085 [2024-07-15 15:18:18.114260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:40.085 [2024-07-15 15:18:18.114268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.085 [2024-07-15 15:18:18.114280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.085 [2024-07-15 15:18:18.114371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.085 [2024-07-15 15:18:18.114387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:40.085 [2024-07-15 15:18:18.114395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.085 [2024-07-15 15:18:18.114404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.085 [2024-07-15 15:18:18.114423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.085 [2024-07-15 15:18:18.114436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:40.085 [2024-07-15 15:18:18.114444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.085 [2024-07-15 15:18:18.114453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.346 [2024-07-15 15:18:18.241416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.346 [2024-07-15 15:18:18.241473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:40.346 [2024-07-15 15:18:18.241487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.346 [2024-07-15 15:18:18.241498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.346 [2024-07-15 15:18:18.349841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.346 [2024-07-15 15:18:18.349902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:40.346 [2024-07-15 15:18:18.349916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.346 [2024-07-15 15:18:18.349929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.346 [2024-07-15 15:18:18.350040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.346 [2024-07-15 15:18:18.350057] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:40.346 [2024-07-15 15:18:18.350066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.346 [2024-07-15 15:18:18.350076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.346 [2024-07-15 15:18:18.350138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.346 [2024-07-15 15:18:18.350153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:40.346 [2024-07-15 15:18:18.350163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.346 [2024-07-15 15:18:18.350174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.346 [2024-07-15 15:18:18.350286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.346 [2024-07-15 15:18:18.350302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:40.346 [2024-07-15 15:18:18.350311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.346 [2024-07-15 15:18:18.350321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.346 [2024-07-15 15:18:18.350361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.346 [2024-07-15 15:18:18.350376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:40.346 [2024-07-15 15:18:18.350385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.346 [2024-07-15 15:18:18.350396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.346 [2024-07-15 15:18:18.350437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.346 [2024-07-15 15:18:18.350451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:40.346 [2024-07-15 15:18:18.350461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.346 [2024-07-15 15:18:18.350484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.346 [2024-07-15 15:18:18.350556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.346 [2024-07-15 15:18:18.350573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:40.346 [2024-07-15 15:18:18.350583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.346 [2024-07-15 15:18:18.350593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.346 [2024-07-15 15:18:18.350742] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 546.825 ms, result 0 00:22:40.346 true 00:22:40.346 15:18:18 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 82353 00:22:40.346 15:18:18 ftl.ftl_restore -- common/autotest_common.sh@948 -- # '[' -z 82353 ']' 00:22:40.346 15:18:18 ftl.ftl_restore -- common/autotest_common.sh@952 -- # kill -0 82353 00:22:40.346 15:18:18 ftl.ftl_restore -- common/autotest_common.sh@953 -- # uname 00:22:40.346 15:18:18 ftl.ftl_restore -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:40.346 15:18:18 ftl.ftl_restore -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82353 00:22:40.346 killing process with pid 82353 00:22:40.346 15:18:18 ftl.ftl_restore -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:40.346 15:18:18 ftl.ftl_restore -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:22:40.346 15:18:18 ftl.ftl_restore -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82353' 00:22:40.346 15:18:18 ftl.ftl_restore -- common/autotest_common.sh@967 -- # kill 82353 00:22:40.346 15:18:18 ftl.ftl_restore -- common/autotest_common.sh@972 -- # wait 82353 00:22:48.521 15:18:25 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:22:50.465 262144+0 records in 00:22:50.465 262144+0 records out 00:22:50.465 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.4682 s, 310 MB/s 00:22:50.465 15:18:28 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:52.372 15:18:30 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:52.372 [2024-07-15 15:18:30.366630] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:22:52.372 [2024-07-15 15:18:30.366742] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82628 ] 00:22:52.630 [2024-07-15 15:18:30.529197] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.888 [2024-07-15 15:18:30.759222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:53.147 [2024-07-15 15:18:31.151166] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:53.147 [2024-07-15 15:18:31.151232] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:53.407 [2024-07-15 15:18:31.307461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.407 [2024-07-15 15:18:31.307526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:53.407 [2024-07-15 15:18:31.307541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:53.407 [2024-07-15 15:18:31.307549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.407 [2024-07-15 15:18:31.307621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.407 [2024-07-15 15:18:31.307636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:53.407 [2024-07-15 15:18:31.307645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:22:53.407 [2024-07-15 15:18:31.307656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.407 [2024-07-15 15:18:31.307679] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:53.407 [2024-07-15 15:18:31.308956] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:53.407 [2024-07-15 15:18:31.309016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.407 [2024-07-15 15:18:31.309031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:53.407 [2024-07-15 15:18:31.309041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.345 ms 00:22:53.407 [2024-07-15 15:18:31.309066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.407 [2024-07-15 15:18:31.310610] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: 
*NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:53.407 [2024-07-15 15:18:31.332327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.407 [2024-07-15 15:18:31.332388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:53.407 [2024-07-15 15:18:31.332402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.757 ms 00:22:53.407 [2024-07-15 15:18:31.332410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.407 [2024-07-15 15:18:31.332504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.407 [2024-07-15 15:18:31.332516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:53.407 [2024-07-15 15:18:31.332527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:22:53.407 [2024-07-15 15:18:31.332534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.407 [2024-07-15 15:18:31.339315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.407 [2024-07-15 15:18:31.339353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:53.407 [2024-07-15 15:18:31.339363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.712 ms 00:22:53.407 [2024-07-15 15:18:31.339371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.407 [2024-07-15 15:18:31.339450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.407 [2024-07-15 15:18:31.339469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:53.407 [2024-07-15 15:18:31.339479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:22:53.407 [2024-07-15 15:18:31.339486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.407 [2024-07-15 15:18:31.339539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.407 [2024-07-15 15:18:31.339550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:53.407 [2024-07-15 15:18:31.339560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:53.407 [2024-07-15 15:18:31.339567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.407 [2024-07-15 15:18:31.339591] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:53.407 [2024-07-15 15:18:31.344794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.407 [2024-07-15 15:18:31.344827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:53.407 [2024-07-15 15:18:31.344837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.220 ms 00:22:53.407 [2024-07-15 15:18:31.344844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.407 [2024-07-15 15:18:31.344878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.407 [2024-07-15 15:18:31.344888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:53.407 [2024-07-15 15:18:31.344895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:53.407 [2024-07-15 15:18:31.344902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.407 [2024-07-15 15:18:31.344948] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:53.407 [2024-07-15 15:18:31.344970] upgrade/ftl_sb_v5.c: 
278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:53.407 [2024-07-15 15:18:31.345022] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:53.407 [2024-07-15 15:18:31.345043] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:53.408 [2024-07-15 15:18:31.345137] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:53.408 [2024-07-15 15:18:31.345148] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:53.408 [2024-07-15 15:18:31.345158] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:53.408 [2024-07-15 15:18:31.345169] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:53.408 [2024-07-15 15:18:31.345194] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:53.408 [2024-07-15 15:18:31.345202] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:53.408 [2024-07-15 15:18:31.345211] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:53.408 [2024-07-15 15:18:31.345218] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:53.408 [2024-07-15 15:18:31.345225] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:53.408 [2024-07-15 15:18:31.345233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.408 [2024-07-15 15:18:31.345244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:53.408 [2024-07-15 15:18:31.345252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:22:53.408 [2024-07-15 15:18:31.345259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.408 [2024-07-15 15:18:31.345327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.408 [2024-07-15 15:18:31.345336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:53.408 [2024-07-15 15:18:31.345346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:22:53.408 [2024-07-15 15:18:31.345365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.408 [2024-07-15 15:18:31.345445] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:53.408 [2024-07-15 15:18:31.345455] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:53.408 [2024-07-15 15:18:31.345465] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:53.408 [2024-07-15 15:18:31.345490] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:53.408 [2024-07-15 15:18:31.345497] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:53.408 [2024-07-15 15:18:31.345504] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:53.408 [2024-07-15 15:18:31.345510] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:53.408 [2024-07-15 15:18:31.345517] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:53.408 [2024-07-15 15:18:31.345525] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:53.408 [2024-07-15 
15:18:31.345532] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:53.408 [2024-07-15 15:18:31.345539] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:53.408 [2024-07-15 15:18:31.345545] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:53.408 [2024-07-15 15:18:31.345554] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:53.408 [2024-07-15 15:18:31.345560] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:53.408 [2024-07-15 15:18:31.345567] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:53.408 [2024-07-15 15:18:31.345574] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:53.408 [2024-07-15 15:18:31.345580] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:53.408 [2024-07-15 15:18:31.345586] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:53.408 [2024-07-15 15:18:31.345593] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:53.408 [2024-07-15 15:18:31.345599] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:53.408 [2024-07-15 15:18:31.345622] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:53.408 [2024-07-15 15:18:31.345629] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:53.408 [2024-07-15 15:18:31.345637] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:53.408 [2024-07-15 15:18:31.345643] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:53.408 [2024-07-15 15:18:31.345650] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:53.408 [2024-07-15 15:18:31.345656] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:53.408 [2024-07-15 15:18:31.345663] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:53.408 [2024-07-15 15:18:31.345670] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:53.408 [2024-07-15 15:18:31.345676] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:53.408 [2024-07-15 15:18:31.345683] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:53.408 [2024-07-15 15:18:31.345690] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:53.408 [2024-07-15 15:18:31.345696] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:53.408 [2024-07-15 15:18:31.345703] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:53.408 [2024-07-15 15:18:31.345709] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:53.408 [2024-07-15 15:18:31.345716] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:53.408 [2024-07-15 15:18:31.345723] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:53.408 [2024-07-15 15:18:31.345729] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:53.408 [2024-07-15 15:18:31.345735] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:53.408 [2024-07-15 15:18:31.345741] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:53.408 [2024-07-15 15:18:31.345748] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:53.408 [2024-07-15 15:18:31.345754] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log_mirror 00:22:53.408 [2024-07-15 15:18:31.345760] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:53.408 [2024-07-15 15:18:31.345766] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:53.408 [2024-07-15 15:18:31.345772] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:53.408 [2024-07-15 15:18:31.345780] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:53.408 [2024-07-15 15:18:31.345788] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:53.408 [2024-07-15 15:18:31.345794] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:53.408 [2024-07-15 15:18:31.345801] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:53.408 [2024-07-15 15:18:31.345808] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:53.408 [2024-07-15 15:18:31.345814] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:53.408 [2024-07-15 15:18:31.345820] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:53.408 [2024-07-15 15:18:31.345826] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:53.408 [2024-07-15 15:18:31.345834] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:53.408 [2024-07-15 15:18:31.345842] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:53.408 [2024-07-15 15:18:31.345851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:53.408 [2024-07-15 15:18:31.345860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:53.408 [2024-07-15 15:18:31.345868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:53.408 [2024-07-15 15:18:31.345875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:53.408 [2024-07-15 15:18:31.345883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:53.408 [2024-07-15 15:18:31.345890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:53.408 [2024-07-15 15:18:31.345897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:53.408 [2024-07-15 15:18:31.345905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:53.408 [2024-07-15 15:18:31.345912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:53.408 [2024-07-15 15:18:31.345918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:53.408 [2024-07-15 15:18:31.345925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:53.408 [2024-07-15 15:18:31.345932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:53.408 [2024-07-15 15:18:31.345939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:53.408 [2024-07-15 15:18:31.345946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:53.408 [2024-07-15 15:18:31.345953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:53.408 [2024-07-15 15:18:31.345960] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:53.408 [2024-07-15 15:18:31.345967] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:53.408 [2024-07-15 15:18:31.345975] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:53.408 [2024-07-15 15:18:31.345982] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:53.408 [2024-07-15 15:18:31.345989] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:53.408 [2024-07-15 15:18:31.345996] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:53.408 [2024-07-15 15:18:31.346003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.408 [2024-07-15 15:18:31.346016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:53.408 [2024-07-15 15:18:31.346025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.608 ms 00:22:53.408 [2024-07-15 15:18:31.346041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.408 [2024-07-15 15:18:31.399258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.408 [2024-07-15 15:18:31.399309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:53.408 [2024-07-15 15:18:31.399323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.267 ms 00:22:53.408 [2024-07-15 15:18:31.399331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.408 [2024-07-15 15:18:31.399431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.408 [2024-07-15 15:18:31.399439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:53.408 [2024-07-15 15:18:31.399447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:22:53.408 [2024-07-15 15:18:31.399454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.408 [2024-07-15 15:18:31.450506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.409 [2024-07-15 15:18:31.450559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:53.409 [2024-07-15 15:18:31.450572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.082 ms 00:22:53.409 [2024-07-15 15:18:31.450597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.409 [2024-07-15 15:18:31.450663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.409 [2024-07-15 
15:18:31.450673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:53.409 [2024-07-15 15:18:31.450682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:53.409 [2024-07-15 15:18:31.450690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.409 [2024-07-15 15:18:31.451187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.409 [2024-07-15 15:18:31.451200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:53.409 [2024-07-15 15:18:31.451210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.434 ms 00:22:53.409 [2024-07-15 15:18:31.451218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.409 [2024-07-15 15:18:31.451341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.409 [2024-07-15 15:18:31.451361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:53.409 [2024-07-15 15:18:31.451371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:22:53.409 [2024-07-15 15:18:31.451378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.409 [2024-07-15 15:18:31.471885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.409 [2024-07-15 15:18:31.471922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:53.409 [2024-07-15 15:18:31.471934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.523 ms 00:22:53.409 [2024-07-15 15:18:31.471942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.409 [2024-07-15 15:18:31.492976] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:22:53.409 [2024-07-15 15:18:31.493038] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:53.409 [2024-07-15 15:18:31.493056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.409 [2024-07-15 15:18:31.493066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:53.409 [2024-07-15 15:18:31.493077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.019 ms 00:22:53.409 [2024-07-15 15:18:31.493085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.669 [2024-07-15 15:18:31.524397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.669 [2024-07-15 15:18:31.524442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:53.669 [2024-07-15 15:18:31.524456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.319 ms 00:22:53.669 [2024-07-15 15:18:31.524463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.669 [2024-07-15 15:18:31.544966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.669 [2024-07-15 15:18:31.545111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:53.669 [2024-07-15 15:18:31.545126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.473 ms 00:22:53.669 [2024-07-15 15:18:31.545134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.669 [2024-07-15 15:18:31.565218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.669 [2024-07-15 15:18:31.565286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore trim metadata 00:22:53.669 [2024-07-15 15:18:31.565301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.059 ms 00:22:53.669 [2024-07-15 15:18:31.565308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.669 [2024-07-15 15:18:31.566203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.669 [2024-07-15 15:18:31.566231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:53.669 [2024-07-15 15:18:31.566240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.759 ms 00:22:53.669 [2024-07-15 15:18:31.566248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.669 [2024-07-15 15:18:31.655459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.669 [2024-07-15 15:18:31.655523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:53.669 [2024-07-15 15:18:31.655538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.356 ms 00:22:53.669 [2024-07-15 15:18:31.655546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.669 [2024-07-15 15:18:31.667763] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:53.669 [2024-07-15 15:18:31.670939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.669 [2024-07-15 15:18:31.670970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:53.669 [2024-07-15 15:18:31.670981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.352 ms 00:22:53.669 [2024-07-15 15:18:31.670997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.669 [2024-07-15 15:18:31.671105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.669 [2024-07-15 15:18:31.671116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:53.669 [2024-07-15 15:18:31.671124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:53.669 [2024-07-15 15:18:31.671132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.669 [2024-07-15 15:18:31.671204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.669 [2024-07-15 15:18:31.671215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:53.669 [2024-07-15 15:18:31.671226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:22:53.670 [2024-07-15 15:18:31.671233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.670 [2024-07-15 15:18:31.671252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.670 [2024-07-15 15:18:31.671260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:53.670 [2024-07-15 15:18:31.671268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:53.670 [2024-07-15 15:18:31.671275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.670 [2024-07-15 15:18:31.671304] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:53.670 [2024-07-15 15:18:31.671313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.670 [2024-07-15 15:18:31.671321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:53.670 [2024-07-15 15:18:31.671330] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:53.670 [2024-07-15 15:18:31.671338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.670 [2024-07-15 15:18:31.711435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.670 [2024-07-15 15:18:31.711494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:53.670 [2024-07-15 15:18:31.711508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.153 ms 00:22:53.670 [2024-07-15 15:18:31.711517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.670 [2024-07-15 15:18:31.711609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.670 [2024-07-15 15:18:31.711620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:53.670 [2024-07-15 15:18:31.711637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:22:53.670 [2024-07-15 15:18:31.711645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.670 [2024-07-15 15:18:31.712967] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 405.749 ms, result 0 00:23:25.213  Copying: 33/1024 [MB] (33 MBps) Copying: 65/1024 [MB] (32 MBps) Copying: 94/1024 [MB] (28 MBps) Copying: 126/1024 [MB] (32 MBps) Copying: 158/1024 [MB] (31 MBps) Copying: 189/1024 [MB] (31 MBps) Copying: 221/1024 [MB] (31 MBps) Copying: 253/1024 [MB] (32 MBps) Copying: 287/1024 [MB] (33 MBps) Copying: 318/1024 [MB] (30 MBps) Copying: 349/1024 [MB] (31 MBps) Copying: 381/1024 [MB] (31 MBps) Copying: 413/1024 [MB] (31 MBps) Copying: 447/1024 [MB] (34 MBps) Copying: 483/1024 [MB] (35 MBps) Copying: 515/1024 [MB] (32 MBps) Copying: 548/1024 [MB] (32 MBps) Copying: 581/1024 [MB] (33 MBps) Copying: 611/1024 [MB] (30 MBps) Copying: 643/1024 [MB] (31 MBps) Copying: 675/1024 [MB] (31 MBps) Copying: 710/1024 [MB] (34 MBps) Copying: 742/1024 [MB] (32 MBps) Copying: 775/1024 [MB] (32 MBps) Copying: 810/1024 [MB] (35 MBps) Copying: 846/1024 [MB] (35 MBps) Copying: 879/1024 [MB] (33 MBps) Copying: 911/1024 [MB] (31 MBps) Copying: 943/1024 [MB] (31 MBps) Copying: 974/1024 [MB] (31 MBps) Copying: 1007/1024 [MB] (32 MBps) Copying: 1024/1024 [MB] (average 32 MBps)[2024-07-15 15:19:03.171596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.213 [2024-07-15 15:19:03.171671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:25.213 [2024-07-15 15:19:03.171687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:25.213 [2024-07-15 15:19:03.171695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.213 [2024-07-15 15:19:03.171716] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:25.213 [2024-07-15 15:19:03.175935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.213 [2024-07-15 15:19:03.175970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:25.213 [2024-07-15 15:19:03.175980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.212 ms 00:23:25.213 [2024-07-15 15:19:03.175986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.213 [2024-07-15 15:19:03.177925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.213 [2024-07-15 15:19:03.177962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Stop core poller 00:23:25.213 [2024-07-15 15:19:03.177979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.912 ms 00:23:25.213 [2024-07-15 15:19:03.177987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.213 [2024-07-15 15:19:03.194888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.213 [2024-07-15 15:19:03.194927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:25.213 [2024-07-15 15:19:03.194940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.896 ms 00:23:25.213 [2024-07-15 15:19:03.194948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.213 [2024-07-15 15:19:03.200277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.213 [2024-07-15 15:19:03.200306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:25.213 [2024-07-15 15:19:03.200320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.308 ms 00:23:25.213 [2024-07-15 15:19:03.200326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.213 [2024-07-15 15:19:03.239351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.213 [2024-07-15 15:19:03.239388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:25.213 [2024-07-15 15:19:03.239399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.036 ms 00:23:25.213 [2024-07-15 15:19:03.239406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.213 [2024-07-15 15:19:03.261595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.213 [2024-07-15 15:19:03.261632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:25.213 [2024-07-15 15:19:03.261644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.196 ms 00:23:25.213 [2024-07-15 15:19:03.261650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.213 [2024-07-15 15:19:03.261779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.213 [2024-07-15 15:19:03.261790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:25.213 [2024-07-15 15:19:03.261799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:23:25.213 [2024-07-15 15:19:03.261805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.213 [2024-07-15 15:19:03.300253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.213 [2024-07-15 15:19:03.300296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:25.213 [2024-07-15 15:19:03.300308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.503 ms 00:23:25.213 [2024-07-15 15:19:03.300315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.471 [2024-07-15 15:19:03.338752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.471 [2024-07-15 15:19:03.338803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:25.471 [2024-07-15 15:19:03.338816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.473 ms 00:23:25.471 [2024-07-15 15:19:03.338824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.471 [2024-07-15 15:19:03.376029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.471 [2024-07-15 15:19:03.376075] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:25.471 [2024-07-15 15:19:03.376087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.229 ms 00:23:25.471 [2024-07-15 15:19:03.376108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.471 [2024-07-15 15:19:03.413597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.471 [2024-07-15 15:19:03.413643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:25.471 [2024-07-15 15:19:03.413656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.487 ms 00:23:25.471 [2024-07-15 15:19:03.413663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.471 [2024-07-15 15:19:03.413697] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:25.471 [2024-07-15 15:19:03.413711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:25.471 [2024-07-15 15:19:03.413720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:25.471 [2024-07-15 15:19:03.413728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:25.471 [2024-07-15 15:19:03.413735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:25.471 [2024-07-15 15:19:03.413743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:25.471 [2024-07-15 15:19:03.413750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:25.471 [2024-07-15 15:19:03.413757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:25.471 [2024-07-15 15:19:03.413765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:25.471 [2024-07-15 15:19:03.413773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:25.471 [2024-07-15 15:19:03.413780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:25.471 [2024-07-15 15:19:03.413787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:25.471 [2024-07-15 15:19:03.413795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:25.471 [2024-07-15 15:19:03.413802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:25.471 [2024-07-15 15:19:03.413810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:25.471 [2024-07-15 15:19:03.413818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:25.471 [2024-07-15 15:19:03.413825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:25.471 [2024-07-15 15:19:03.413831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:25.471 [2024-07-15 15:19:03.413838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:25.471 [2024-07-15 15:19:03.413845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 
261120 wr_cnt: 0 state: free 00:23:25.471 [2024-07-15 15:19:03.413852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.413859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.413866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.413874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.413881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.413888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.413895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.413903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.413911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.413918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.413926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.413933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.413940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.413947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.413954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.413961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.413969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.413976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.413983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414259] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 
15:19:03.414463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:25.472 [2024-07-15 15:19:03.414554] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:25.472 [2024-07-15 15:19:03.414567] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9bb54792-33fc-4be3-bb67-de335ccaec96 00:23:25.472 [2024-07-15 15:19:03.414580] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:25.472 [2024-07-15 15:19:03.414592] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:25.472 [2024-07-15 15:19:03.414604] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:25.472 [2024-07-15 15:19:03.414623] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:25.472 [2024-07-15 15:19:03.414634] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:25.472 [2024-07-15 15:19:03.414645] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:25.472 [2024-07-15 15:19:03.414653] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:25.472 [2024-07-15 15:19:03.414660] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:25.472 [2024-07-15 15:19:03.414668] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:25.472 [2024-07-15 15:19:03.414677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.472 [2024-07-15 15:19:03.414686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:25.472 [2024-07-15 15:19:03.414695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.984 ms 00:23:25.472 [2024-07-15 15:19:03.414703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.472 [2024-07-15 15:19:03.434792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.472 [2024-07-15 15:19:03.434838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:25.473 [2024-07-15 15:19:03.434850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.093 ms 00:23:25.473 [2024-07-15 15:19:03.434869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.473 [2024-07-15 15:19:03.435463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.473 [2024-07-15 15:19:03.435482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:25.473 [2024-07-15 15:19:03.435491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.575 ms 00:23:25.473 [2024-07-15 15:19:03.435498] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.473 [2024-07-15 15:19:03.479495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:25.473 [2024-07-15 15:19:03.479542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:25.473 [2024-07-15 15:19:03.479554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:25.473 [2024-07-15 15:19:03.479562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.473 [2024-07-15 15:19:03.479620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:25.473 [2024-07-15 15:19:03.479628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:25.473 [2024-07-15 15:19:03.479636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:25.473 [2024-07-15 15:19:03.479643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.473 [2024-07-15 15:19:03.479706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:25.473 [2024-07-15 15:19:03.479721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:25.473 [2024-07-15 15:19:03.479729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:25.473 [2024-07-15 15:19:03.479736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.473 [2024-07-15 15:19:03.479752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:25.473 [2024-07-15 15:19:03.479759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:25.473 [2024-07-15 15:19:03.479767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:25.473 [2024-07-15 15:19:03.479773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.731 [2024-07-15 15:19:03.603287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:25.731 [2024-07-15 15:19:03.603350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:25.731 [2024-07-15 15:19:03.603362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:25.731 [2024-07-15 15:19:03.603370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.731 [2024-07-15 15:19:03.710247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:25.731 [2024-07-15 15:19:03.710299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:25.731 [2024-07-15 15:19:03.710311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:25.731 [2024-07-15 15:19:03.710319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.731 [2024-07-15 15:19:03.710382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:25.731 [2024-07-15 15:19:03.710391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:25.731 [2024-07-15 15:19:03.710399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:25.731 [2024-07-15 15:19:03.710411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.731 [2024-07-15 15:19:03.710442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:25.731 [2024-07-15 15:19:03.710450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:25.731 [2024-07-15 15:19:03.710457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:23:25.731 [2024-07-15 15:19:03.710463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.731 [2024-07-15 15:19:03.710585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:25.731 [2024-07-15 15:19:03.710600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:25.731 [2024-07-15 15:19:03.710612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:25.731 [2024-07-15 15:19:03.710627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.731 [2024-07-15 15:19:03.710670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:25.731 [2024-07-15 15:19:03.710682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:25.731 [2024-07-15 15:19:03.710689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:25.731 [2024-07-15 15:19:03.710696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.731 [2024-07-15 15:19:03.710733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:25.731 [2024-07-15 15:19:03.710741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:25.731 [2024-07-15 15:19:03.710749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:25.731 [2024-07-15 15:19:03.710756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.731 [2024-07-15 15:19:03.710800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:25.731 [2024-07-15 15:19:03.710809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:25.731 [2024-07-15 15:19:03.710817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:25.731 [2024-07-15 15:19:03.710824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.731 [2024-07-15 15:19:03.710939] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 540.353 ms, result 0 00:23:27.634 00:23:27.634 00:23:27.634 15:19:05 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:23:27.634 [2024-07-15 15:19:05.601413] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
00:23:27.634 [2024-07-15 15:19:05.601692] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82982 ] 00:23:27.892 [2024-07-15 15:19:05.785565] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.151 [2024-07-15 15:19:06.017093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.409 [2024-07-15 15:19:06.415251] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:28.409 [2024-07-15 15:19:06.415408] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:28.668 [2024-07-15 15:19:06.571280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.668 [2024-07-15 15:19:06.571331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:28.669 [2024-07-15 15:19:06.571346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:28.669 [2024-07-15 15:19:06.571354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.669 [2024-07-15 15:19:06.571409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.669 [2024-07-15 15:19:06.571421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:28.669 [2024-07-15 15:19:06.571430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:23:28.669 [2024-07-15 15:19:06.571439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.669 [2024-07-15 15:19:06.571459] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:28.669 [2024-07-15 15:19:06.572591] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:28.669 [2024-07-15 15:19:06.572616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.669 [2024-07-15 15:19:06.572627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:28.669 [2024-07-15 15:19:06.572635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.164 ms 00:23:28.669 [2024-07-15 15:19:06.572643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.669 [2024-07-15 15:19:06.574025] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:28.669 [2024-07-15 15:19:06.594300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.669 [2024-07-15 15:19:06.594333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:28.669 [2024-07-15 15:19:06.594346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.314 ms 00:23:28.669 [2024-07-15 15:19:06.594354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.669 [2024-07-15 15:19:06.594421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.669 [2024-07-15 15:19:06.594431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:28.669 [2024-07-15 15:19:06.594442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:23:28.669 [2024-07-15 15:19:06.594450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.669 [2024-07-15 15:19:06.601350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:28.669 [2024-07-15 15:19:06.601378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:28.669 [2024-07-15 15:19:06.601388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.848 ms 00:23:28.669 [2024-07-15 15:19:06.601396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.669 [2024-07-15 15:19:06.601472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.669 [2024-07-15 15:19:06.601488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:28.669 [2024-07-15 15:19:06.601496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:23:28.669 [2024-07-15 15:19:06.601503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.669 [2024-07-15 15:19:06.601548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.669 [2024-07-15 15:19:06.601557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:28.669 [2024-07-15 15:19:06.601565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:28.669 [2024-07-15 15:19:06.601572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.669 [2024-07-15 15:19:06.601597] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:28.669 [2024-07-15 15:19:06.607554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.669 [2024-07-15 15:19:06.607585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:28.669 [2024-07-15 15:19:06.607596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.977 ms 00:23:28.669 [2024-07-15 15:19:06.607604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.669 [2024-07-15 15:19:06.607640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.669 [2024-07-15 15:19:06.607650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:28.669 [2024-07-15 15:19:06.607659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:28.669 [2024-07-15 15:19:06.607667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.669 [2024-07-15 15:19:06.607725] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:28.669 [2024-07-15 15:19:06.607746] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:28.669 [2024-07-15 15:19:06.607779] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:28.669 [2024-07-15 15:19:06.607797] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:23:28.669 [2024-07-15 15:19:06.607880] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:28.669 [2024-07-15 15:19:06.607890] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:28.669 [2024-07-15 15:19:06.607900] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:23:28.669 [2024-07-15 15:19:06.607910] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:28.669 [2024-07-15 15:19:06.607919] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:28.669 [2024-07-15 15:19:06.607926] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:28.669 [2024-07-15 15:19:06.607935] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:28.669 [2024-07-15 15:19:06.607941] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:28.669 [2024-07-15 15:19:06.607949] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:28.669 [2024-07-15 15:19:06.607957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.669 [2024-07-15 15:19:06.607966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:28.669 [2024-07-15 15:19:06.607974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:23:28.669 [2024-07-15 15:19:06.607980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.669 [2024-07-15 15:19:06.608060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.669 [2024-07-15 15:19:06.608069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:28.669 [2024-07-15 15:19:06.608076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:23:28.669 [2024-07-15 15:19:06.608083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.669 [2024-07-15 15:19:06.608165] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:28.669 [2024-07-15 15:19:06.608175] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:28.669 [2024-07-15 15:19:06.608185] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:28.669 [2024-07-15 15:19:06.608192] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:28.669 [2024-07-15 15:19:06.608201] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:28.669 [2024-07-15 15:19:06.608207] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:28.669 [2024-07-15 15:19:06.608214] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:28.669 [2024-07-15 15:19:06.608223] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:28.669 [2024-07-15 15:19:06.608231] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:28.669 [2024-07-15 15:19:06.608237] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:28.669 [2024-07-15 15:19:06.608244] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:28.669 [2024-07-15 15:19:06.608251] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:28.669 [2024-07-15 15:19:06.608257] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:28.669 [2024-07-15 15:19:06.608264] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:28.669 [2024-07-15 15:19:06.608271] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:28.669 [2024-07-15 15:19:06.608277] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:28.669 [2024-07-15 15:19:06.608283] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:28.669 [2024-07-15 15:19:06.608290] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:28.669 [2024-07-15 15:19:06.608296] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:28.669 [2024-07-15 15:19:06.608303] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:28.669 [2024-07-15 15:19:06.608324] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:28.669 [2024-07-15 15:19:06.608331] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:28.669 [2024-07-15 15:19:06.608338] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:28.669 [2024-07-15 15:19:06.608344] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:28.669 [2024-07-15 15:19:06.608351] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:28.669 [2024-07-15 15:19:06.608357] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:28.669 [2024-07-15 15:19:06.608364] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:28.669 [2024-07-15 15:19:06.608370] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:28.669 [2024-07-15 15:19:06.608376] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:28.669 [2024-07-15 15:19:06.608382] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:28.669 [2024-07-15 15:19:06.608389] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:28.669 [2024-07-15 15:19:06.608396] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:28.669 [2024-07-15 15:19:06.608403] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:28.669 [2024-07-15 15:19:06.608409] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:28.669 [2024-07-15 15:19:06.608416] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:28.669 [2024-07-15 15:19:06.608422] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:28.669 [2024-07-15 15:19:06.608429] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:28.669 [2024-07-15 15:19:06.608435] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:28.669 [2024-07-15 15:19:06.608442] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:28.669 [2024-07-15 15:19:06.608448] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:28.669 [2024-07-15 15:19:06.608455] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:28.669 [2024-07-15 15:19:06.608461] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:28.670 [2024-07-15 15:19:06.608468] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:28.670 [2024-07-15 15:19:06.608474] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:28.670 [2024-07-15 15:19:06.608481] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:28.670 [2024-07-15 15:19:06.608488] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:28.670 [2024-07-15 15:19:06.608495] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:28.670 [2024-07-15 15:19:06.608502] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:28.670 [2024-07-15 15:19:06.608509] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:28.670 [2024-07-15 15:19:06.608516] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:28.670 
[2024-07-15 15:19:06.608522] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:28.670 [2024-07-15 15:19:06.608528] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:28.670 [2024-07-15 15:19:06.608535] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:28.670 [2024-07-15 15:19:06.608544] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:28.670 [2024-07-15 15:19:06.608553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:28.670 [2024-07-15 15:19:06.608562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:28.670 [2024-07-15 15:19:06.608569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:28.670 [2024-07-15 15:19:06.608577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:28.670 [2024-07-15 15:19:06.608585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:28.670 [2024-07-15 15:19:06.608592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:28.670 [2024-07-15 15:19:06.608600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:28.670 [2024-07-15 15:19:06.608607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:28.670 [2024-07-15 15:19:06.608614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:28.670 [2024-07-15 15:19:06.608622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:28.670 [2024-07-15 15:19:06.608630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:28.670 [2024-07-15 15:19:06.608637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:28.670 [2024-07-15 15:19:06.608644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:28.670 [2024-07-15 15:19:06.608651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:28.670 [2024-07-15 15:19:06.608658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:28.670 [2024-07-15 15:19:06.608665] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:28.670 [2024-07-15 15:19:06.608672] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:28.670 [2024-07-15 15:19:06.608681] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:28.670 [2024-07-15 15:19:06.608688] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:28.670 [2024-07-15 15:19:06.608696] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:28.670 [2024-07-15 15:19:06.608703] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:28.670 [2024-07-15 15:19:06.608711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.670 [2024-07-15 15:19:06.608790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:28.670 [2024-07-15 15:19:06.608797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.599 ms 00:23:28.670 [2024-07-15 15:19:06.608804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.670 [2024-07-15 15:19:06.667094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.670 [2024-07-15 15:19:06.667151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:28.670 [2024-07-15 15:19:06.667166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.353 ms 00:23:28.670 [2024-07-15 15:19:06.667175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.670 [2024-07-15 15:19:06.667302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.670 [2024-07-15 15:19:06.667312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:28.670 [2024-07-15 15:19:06.667322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:23:28.670 [2024-07-15 15:19:06.667330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.670 [2024-07-15 15:19:06.719138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.670 [2024-07-15 15:19:06.719190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:28.670 [2024-07-15 15:19:06.719203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.832 ms 00:23:28.670 [2024-07-15 15:19:06.719210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.670 [2024-07-15 15:19:06.719266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.670 [2024-07-15 15:19:06.719276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:28.670 [2024-07-15 15:19:06.719284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:28.670 [2024-07-15 15:19:06.719292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.670 [2024-07-15 15:19:06.719788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.670 [2024-07-15 15:19:06.719804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:28.670 [2024-07-15 15:19:06.719813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms 00:23:28.670 [2024-07-15 15:19:06.719820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.670 [2024-07-15 15:19:06.719931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.670 [2024-07-15 15:19:06.719943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:28.670 [2024-07-15 15:19:06.719952] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:23:28.670 [2024-07-15 15:19:06.719958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.670 [2024-07-15 15:19:06.741523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.670 [2024-07-15 15:19:06.741568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:28.670 [2024-07-15 15:19:06.741582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.583 ms 00:23:28.670 [2024-07-15 15:19:06.741590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.670 [2024-07-15 15:19:06.762948] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:28.670 [2024-07-15 15:19:06.763005] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:28.670 [2024-07-15 15:19:06.763036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.670 [2024-07-15 15:19:06.763045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:28.670 [2024-07-15 15:19:06.763055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.336 ms 00:23:28.670 [2024-07-15 15:19:06.763063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.929 [2024-07-15 15:19:06.795942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.929 [2024-07-15 15:19:06.796017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:28.929 [2024-07-15 15:19:06.796031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.886 ms 00:23:28.929 [2024-07-15 15:19:06.796064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.929 [2024-07-15 15:19:06.816922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.929 [2024-07-15 15:19:06.816965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:28.929 [2024-07-15 15:19:06.816976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.824 ms 00:23:28.929 [2024-07-15 15:19:06.816983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.929 [2024-07-15 15:19:06.837587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.929 [2024-07-15 15:19:06.837630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:28.929 [2024-07-15 15:19:06.837641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.574 ms 00:23:28.929 [2024-07-15 15:19:06.837649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.929 [2024-07-15 15:19:06.838526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.929 [2024-07-15 15:19:06.838566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:28.929 [2024-07-15 15:19:06.838578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.763 ms 00:23:28.929 [2024-07-15 15:19:06.838587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.929 [2024-07-15 15:19:06.932446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.929 [2024-07-15 15:19:06.932510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:28.929 [2024-07-15 15:19:06.932524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 94.016 ms 00:23:28.929 [2024-07-15 15:19:06.932548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.929 [2024-07-15 15:19:06.944810] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:28.929 [2024-07-15 15:19:06.948074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.929 [2024-07-15 15:19:06.948103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:28.929 [2024-07-15 15:19:06.948115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.486 ms 00:23:28.929 [2024-07-15 15:19:06.948122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.929 [2024-07-15 15:19:06.948209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.929 [2024-07-15 15:19:06.948219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:28.929 [2024-07-15 15:19:06.948227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:28.929 [2024-07-15 15:19:06.948233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.929 [2024-07-15 15:19:06.948297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.929 [2024-07-15 15:19:06.948309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:28.929 [2024-07-15 15:19:06.948316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:28.929 [2024-07-15 15:19:06.948324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.930 [2024-07-15 15:19:06.948341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.930 [2024-07-15 15:19:06.948349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:28.930 [2024-07-15 15:19:06.948356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:28.930 [2024-07-15 15:19:06.948363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.930 [2024-07-15 15:19:06.948391] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:28.930 [2024-07-15 15:19:06.948400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.930 [2024-07-15 15:19:06.948406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:28.930 [2024-07-15 15:19:06.948416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:28.930 [2024-07-15 15:19:06.948423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.930 [2024-07-15 15:19:06.988305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.930 [2024-07-15 15:19:06.988343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:28.930 [2024-07-15 15:19:06.988356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.940 ms 00:23:28.930 [2024-07-15 15:19:06.988364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.930 [2024-07-15 15:19:06.988436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.930 [2024-07-15 15:19:06.988453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:28.930 [2024-07-15 15:19:06.988461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:23:28.930 [2024-07-15 15:19:06.988468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
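[Editor's note, not part of the captured test output] The superblock layout entries above give region sizes in FTL blocks (blk_sz), while the region dump gives MiB; the two agree if one FTL block is 4 KiB, which is what the numbers themselves imply (the type:0x9 base-device region of 0x1900000 blocks is exactly the 102400.00 MiB shown for data_btm). A minimal conversion sketch in shell, assuming that 4 KiB block size:

    # Convert a blk_sz value from the SB metadata layout dump to MiB,
    # assuming the 4 KiB FTL block size implied by the dump itself.
    blk_sz_to_mib() { echo "$(( $1 * 4096 / 1024 / 1024 )) MiB"; }
    blk_sz_to_mib 0x5000      # -> 80 MiB (cf. the 80.00 MiB l2p region in the NV cache layout dumps)
    blk_sz_to_mib 0x1900000   # -> 102400 MiB (cf. the data_btm region)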
00:23:28.930 [2024-07-15 15:19:06.989720] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 418.677 ms, result 0 00:23:57.813  Copying: 34/1024 [MB] (34 MBps) Copying: 70/1024 [MB] (35 MBps) Copying: 106/1024 [MB] (36 MBps) Copying: 141/1024 [MB] (35 MBps) Copying: 177/1024 [MB] (35 MBps) Copying: 212/1024 [MB] (35 MBps) Copying: 247/1024 [MB] (34 MBps) Copying: 282/1024 [MB] (35 MBps) Copying: 318/1024 [MB] (36 MBps) Copying: 353/1024 [MB] (34 MBps) Copying: 389/1024 [MB] (35 MBps) Copying: 425/1024 [MB] (36 MBps) Copying: 461/1024 [MB] (36 MBps) Copying: 497/1024 [MB] (35 MBps) Copying: 530/1024 [MB] (33 MBps) Copying: 566/1024 [MB] (35 MBps) Copying: 600/1024 [MB] (34 MBps) Copying: 637/1024 [MB] (36 MBps) Copying: 674/1024 [MB] (36 MBps) Copying: 711/1024 [MB] (36 MBps) Copying: 748/1024 [MB] (37 MBps) Copying: 785/1024 [MB] (37 MBps) Copying: 823/1024 [MB] (37 MBps) Copying: 860/1024 [MB] (36 MBps) Copying: 897/1024 [MB] (37 MBps) Copying: 936/1024 [MB] (38 MBps) Copying: 974/1024 [MB] (38 MBps) Copying: 1011/1024 [MB] (37 MBps) Copying: 1024/1024 [MB] (average 36 MBps)[2024-07-15 15:19:35.873655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.813 [2024-07-15 15:19:35.873746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:57.813 [2024-07-15 15:19:35.873770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:57.813 [2024-07-15 15:19:35.873783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.813 [2024-07-15 15:19:35.873818] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:57.813 [2024-07-15 15:19:35.880269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.813 [2024-07-15 15:19:35.880331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:57.813 [2024-07-15 15:19:35.880346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.440 ms 00:23:57.813 [2024-07-15 15:19:35.880355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.813 [2024-07-15 15:19:35.880616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.813 [2024-07-15 15:19:35.880627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:57.813 [2024-07-15 15:19:35.880637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:23:57.813 [2024-07-15 15:19:35.880646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.813 [2024-07-15 15:19:35.884445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.813 [2024-07-15 15:19:35.884479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:57.813 [2024-07-15 15:19:35.884491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.790 ms 00:23:57.813 [2024-07-15 15:19:35.884500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.813 [2024-07-15 15:19:35.890684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.813 [2024-07-15 15:19:35.890717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:57.813 [2024-07-15 15:19:35.890731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.170 ms 00:23:57.813 [2024-07-15 15:19:35.890739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.073 
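[Editor's note, not part of the captured test output] The copy loop above reports an average of roughly 36 MBps over 1024 MB, i.e. about 28-29 seconds, which is consistent with the console elapsed-time stamps jumping from 00:23:28 to 00:23:57 between the FTL startup finish and the first shutdown step. A trivial sanity check in shell:

    # ~28 s expected for 1024 MB at the reported ~36 MBps average,
    # matching the 00:23:28 -> 00:23:57 gap in the console timestamps.
    echo "$(( 1024 / 36 )) s"   # integer estimate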
[2024-07-15 15:19:35.931500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.073 [2024-07-15 15:19:35.931540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:58.073 [2024-07-15 15:19:35.931553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.760 ms 00:23:58.073 [2024-07-15 15:19:35.931560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.073 [2024-07-15 15:19:35.954084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.073 [2024-07-15 15:19:35.954120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:58.073 [2024-07-15 15:19:35.954131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.524 ms 00:23:58.073 [2024-07-15 15:19:35.954139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.073 [2024-07-15 15:19:35.954271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.073 [2024-07-15 15:19:35.954282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:58.073 [2024-07-15 15:19:35.954291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:23:58.073 [2024-07-15 15:19:35.954301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.073 [2024-07-15 15:19:35.994156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.073 [2024-07-15 15:19:35.994202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:58.073 [2024-07-15 15:19:35.994215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.915 ms 00:23:58.073 [2024-07-15 15:19:35.994222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.073 [2024-07-15 15:19:36.035424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.073 [2024-07-15 15:19:36.035481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:58.073 [2024-07-15 15:19:36.035495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.238 ms 00:23:58.073 [2024-07-15 15:19:36.035502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.073 [2024-07-15 15:19:36.073832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.073 [2024-07-15 15:19:36.073885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:58.073 [2024-07-15 15:19:36.073914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.352 ms 00:23:58.073 [2024-07-15 15:19:36.073921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.073 [2024-07-15 15:19:36.111313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.073 [2024-07-15 15:19:36.111353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:58.073 [2024-07-15 15:19:36.111364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.375 ms 00:23:58.073 [2024-07-15 15:19:36.111371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.073 [2024-07-15 15:19:36.111424] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:58.073 [2024-07-15 15:19:36.111440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:58.073 [2024-07-15 15:19:36.111451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 
state: free 00:23:58.073 [2024-07-15 15:19:36.111460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:58.073 [2024-07-15 15:19:36.111469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:58.073 [2024-07-15 15:19:36.111478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:58.073 [2024-07-15 15:19:36.111487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:58.073 [2024-07-15 15:19:36.111495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:58.073 [2024-07-15 15:19:36.111504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:58.073 [2024-07-15 15:19:36.111513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:58.073 [2024-07-15 15:19:36.111521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:58.073 [2024-07-15 15:19:36.111530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:58.073 [2024-07-15 15:19:36.111539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:58.073 [2024-07-15 15:19:36.111547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:58.073 [2024-07-15 15:19:36.111556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:58.073 [2024-07-15 15:19:36.111564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:58.073 [2024-07-15 15:19:36.111572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:58.073 [2024-07-15 15:19:36.111581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:58.073 [2024-07-15 15:19:36.111589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:58.073 [2024-07-15 15:19:36.111597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:58.073 [2024-07-15 15:19:36.111606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:58.073 [2024-07-15 15:19:36.111614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:58.073 [2024-07-15 15:19:36.111622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:58.073 [2024-07-15 15:19:36.111631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:58.073 [2024-07-15 15:19:36.111639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:58.073 [2024-07-15 15:19:36.111647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:58.073 [2024-07-15 15:19:36.111655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:58.073 [2024-07-15 15:19:36.111664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 
261120 wr_cnt: 0 state: free 00:23:58.073 [2024-07-15 15:19:36.111672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:58.073 [2024-07-15 15:19:36.111680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:58.073 [2024-07-15 15:19:36.111690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:58.073 [2024-07-15 15:19:36.111698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:58.073 [2024-07-15 15:19:36.111707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:58.073 [2024-07-15 15:19:36.111715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:58.073 [2024-07-15 15:19:36.111723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.111793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.111802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.111810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.111819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.111827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.111836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.111844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.111853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.111861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.111869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.111878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.111886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.111895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.111903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.111912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.111920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.111928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.111937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.111945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.111954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.111962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.111972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.111981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.112011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.112021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.112030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.112038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.112047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.112056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.112067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.112079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.112091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.112103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.112117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.112130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.112144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.112156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.112164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.112173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.112181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.112190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.112202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.112214] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.112227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.112240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.112253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.112262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.112271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.112281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.112289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.112299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.112314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.112326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.112334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.112344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.112357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.112367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.112375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.112384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.112393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.112406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.112420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.112429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.112440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.112454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.112463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:58.074 [2024-07-15 15:19:36.112480] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:58.074 [2024-07-15 15:19:36.112488] ftl_debug.c: 
212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9bb54792-33fc-4be3-bb67-de335ccaec96 00:23:58.074 [2024-07-15 15:19:36.112497] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:58.074 [2024-07-15 15:19:36.112508] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:58.074 [2024-07-15 15:19:36.112526] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:58.074 [2024-07-15 15:19:36.112535] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:58.074 [2024-07-15 15:19:36.112542] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:58.074 [2024-07-15 15:19:36.112550] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:58.074 [2024-07-15 15:19:36.112561] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:58.074 [2024-07-15 15:19:36.112572] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:58.074 [2024-07-15 15:19:36.112583] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:58.074 [2024-07-15 15:19:36.112591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.074 [2024-07-15 15:19:36.112602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:58.074 [2024-07-15 15:19:36.112616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.171 ms 00:23:58.074 [2024-07-15 15:19:36.112626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.074 [2024-07-15 15:19:36.133563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.074 [2024-07-15 15:19:36.133599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:58.074 [2024-07-15 15:19:36.133635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.939 ms 00:23:58.074 [2024-07-15 15:19:36.133643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.074 [2024-07-15 15:19:36.134145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.074 [2024-07-15 15:19:36.134154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:58.074 [2024-07-15 15:19:36.134162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.483 ms 00:23:58.074 [2024-07-15 15:19:36.134185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.074 [2024-07-15 15:19:36.178858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.074 [2024-07-15 15:19:36.178905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:58.074 [2024-07-15 15:19:36.178917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.074 [2024-07-15 15:19:36.178924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.074 [2024-07-15 15:19:36.178984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.074 [2024-07-15 15:19:36.179005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:58.074 [2024-07-15 15:19:36.179014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.074 [2024-07-15 15:19:36.179022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.075 [2024-07-15 15:19:36.179095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.075 [2024-07-15 15:19:36.179107] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:58.075 [2024-07-15 15:19:36.179115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.075 [2024-07-15 15:19:36.179141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.075 [2024-07-15 15:19:36.179174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.075 [2024-07-15 15:19:36.179182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:58.075 [2024-07-15 15:19:36.179190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.075 [2024-07-15 15:19:36.179198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.333 [2024-07-15 15:19:36.299787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.333 [2024-07-15 15:19:36.299844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:58.333 [2024-07-15 15:19:36.299855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.333 [2024-07-15 15:19:36.299863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.333 [2024-07-15 15:19:36.404744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.333 [2024-07-15 15:19:36.404805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:58.333 [2024-07-15 15:19:36.404817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.333 [2024-07-15 15:19:36.404825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.333 [2024-07-15 15:19:36.404890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.333 [2024-07-15 15:19:36.404898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:58.333 [2024-07-15 15:19:36.404912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.333 [2024-07-15 15:19:36.404919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.333 [2024-07-15 15:19:36.404947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.333 [2024-07-15 15:19:36.404955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:58.333 [2024-07-15 15:19:36.404962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.333 [2024-07-15 15:19:36.404969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.333 [2024-07-15 15:19:36.405089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.333 [2024-07-15 15:19:36.405120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:58.333 [2024-07-15 15:19:36.405131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.333 [2024-07-15 15:19:36.405138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.333 [2024-07-15 15:19:36.405190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.333 [2024-07-15 15:19:36.405199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:58.333 [2024-07-15 15:19:36.405207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.333 [2024-07-15 15:19:36.405213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.333 [2024-07-15 15:19:36.405248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
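[Editor's note, not part of the captured test output] Every management step in this log is traced with the same lines from mngt/ftl_mngt.c (Action or Rollback, name, duration, status), so the slow steps are easy to pull out of a saved copy of this console output. A small sketch, with build.log as a placeholder path for that saved log:

    # List the five largest per-step durations reported by the trace_step lines.
    grep -o 'duration: [0-9.]* ms' build.log | sort -k2 -n | tail -n 5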
00:23:58.333 [2024-07-15 15:19:36.405257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:58.333 [2024-07-15 15:19:36.405265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.333 [2024-07-15 15:19:36.405275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.333 [2024-07-15 15:19:36.405315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.333 [2024-07-15 15:19:36.405324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:58.333 [2024-07-15 15:19:36.405331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.333 [2024-07-15 15:19:36.405338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.333 [2024-07-15 15:19:36.405457] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 532.817 ms, result 0 00:23:59.757 00:23:59.757 00:23:59.757 15:19:37 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:01.677 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:01.677 15:19:39 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:24:01.677 [2024-07-15 15:19:39.438408] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:24:01.677 [2024-07-15 15:19:39.438535] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83324 ] 00:24:01.677 [2024-07-15 15:19:39.600633] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.935 [2024-07-15 15:19:39.841480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.193 [2024-07-15 15:19:40.266149] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:02.193 [2024-07-15 15:19:40.266215] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:02.453 [2024-07-15 15:19:40.425105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.453 [2024-07-15 15:19:40.425158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:02.453 [2024-07-15 15:19:40.425171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:02.453 [2024-07-15 15:19:40.425180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.453 [2024-07-15 15:19:40.425239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.453 [2024-07-15 15:19:40.425251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:02.453 [2024-07-15 15:19:40.425259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:24:02.453 [2024-07-15 15:19:40.425269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.453 [2024-07-15 15:19:40.425289] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:02.453 [2024-07-15 15:19:40.426450] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:02.453 [2024-07-15 
15:19:40.426487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.453 [2024-07-15 15:19:40.426500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:02.453 [2024-07-15 15:19:40.426517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.204 ms 00:24:02.453 [2024-07-15 15:19:40.426525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.453 [2024-07-15 15:19:40.428052] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:02.453 [2024-07-15 15:19:40.451103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.453 [2024-07-15 15:19:40.451146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:02.453 [2024-07-15 15:19:40.451159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.096 ms 00:24:02.453 [2024-07-15 15:19:40.451168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.453 [2024-07-15 15:19:40.451239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.453 [2024-07-15 15:19:40.451251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:02.453 [2024-07-15 15:19:40.451263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:24:02.453 [2024-07-15 15:19:40.451272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.453 [2024-07-15 15:19:40.458421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.453 [2024-07-15 15:19:40.458452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:02.453 [2024-07-15 15:19:40.458462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.092 ms 00:24:02.453 [2024-07-15 15:19:40.458471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.453 [2024-07-15 15:19:40.458573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.453 [2024-07-15 15:19:40.458588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:02.453 [2024-07-15 15:19:40.458596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:24:02.453 [2024-07-15 15:19:40.458605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.453 [2024-07-15 15:19:40.458655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.453 [2024-07-15 15:19:40.458665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:02.453 [2024-07-15 15:19:40.458675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:02.453 [2024-07-15 15:19:40.458682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.453 [2024-07-15 15:19:40.458709] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:02.453 [2024-07-15 15:19:40.464503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.453 [2024-07-15 15:19:40.464533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:02.453 [2024-07-15 15:19:40.464542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.813 ms 00:24:02.453 [2024-07-15 15:19:40.464561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.453 [2024-07-15 15:19:40.464602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.453 [2024-07-15 15:19:40.464612] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:02.453 [2024-07-15 15:19:40.464620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:02.453 [2024-07-15 15:19:40.464644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.453 [2024-07-15 15:19:40.464694] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:02.453 [2024-07-15 15:19:40.464717] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:02.453 [2024-07-15 15:19:40.464754] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:02.453 [2024-07-15 15:19:40.464772] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:24:02.453 [2024-07-15 15:19:40.464867] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:02.453 [2024-07-15 15:19:40.464891] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:02.453 [2024-07-15 15:19:40.464903] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:24:02.453 [2024-07-15 15:19:40.464914] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:02.453 [2024-07-15 15:19:40.464924] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:02.453 [2024-07-15 15:19:40.464933] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:02.453 [2024-07-15 15:19:40.464941] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:02.453 [2024-07-15 15:19:40.464949] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:02.453 [2024-07-15 15:19:40.464958] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:02.453 [2024-07-15 15:19:40.464967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.453 [2024-07-15 15:19:40.464978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:02.453 [2024-07-15 15:19:40.464987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:24:02.453 [2024-07-15 15:19:40.464995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.453 [2024-07-15 15:19:40.465085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.453 [2024-07-15 15:19:40.465097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:02.453 [2024-07-15 15:19:40.465109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:24:02.453 [2024-07-15 15:19:40.465120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.453 [2024-07-15 15:19:40.465222] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:02.453 [2024-07-15 15:19:40.465239] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:02.453 [2024-07-15 15:19:40.465253] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:02.453 [2024-07-15 15:19:40.465262] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:02.453 [2024-07-15 15:19:40.465270] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] 
Region l2p 00:24:02.453 [2024-07-15 15:19:40.465278] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:02.453 [2024-07-15 15:19:40.465285] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:02.453 [2024-07-15 15:19:40.465293] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:02.453 [2024-07-15 15:19:40.465301] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:02.453 [2024-07-15 15:19:40.465310] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:02.453 [2024-07-15 15:19:40.465318] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:02.453 [2024-07-15 15:19:40.465326] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:02.453 [2024-07-15 15:19:40.465333] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:02.453 [2024-07-15 15:19:40.465341] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:02.453 [2024-07-15 15:19:40.465348] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:02.453 [2024-07-15 15:19:40.465356] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:02.453 [2024-07-15 15:19:40.465363] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:02.453 [2024-07-15 15:19:40.465370] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:02.453 [2024-07-15 15:19:40.465378] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:02.453 [2024-07-15 15:19:40.465385] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:02.453 [2024-07-15 15:19:40.465407] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:02.453 [2024-07-15 15:19:40.465415] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:02.453 [2024-07-15 15:19:40.465423] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:02.453 [2024-07-15 15:19:40.465431] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:02.453 [2024-07-15 15:19:40.465438] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:02.453 [2024-07-15 15:19:40.465445] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:02.453 [2024-07-15 15:19:40.465452] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:02.453 [2024-07-15 15:19:40.465459] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:02.453 [2024-07-15 15:19:40.465467] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:02.453 [2024-07-15 15:19:40.465474] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:02.453 [2024-07-15 15:19:40.465482] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:02.453 [2024-07-15 15:19:40.465489] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:02.453 [2024-07-15 15:19:40.465497] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:02.453 [2024-07-15 15:19:40.465504] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:02.453 [2024-07-15 15:19:40.465511] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:02.454 [2024-07-15 15:19:40.465518] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:02.454 [2024-07-15 15:19:40.465525] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:02.454 [2024-07-15 15:19:40.465533] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:02.454 [2024-07-15 15:19:40.465541] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:02.454 [2024-07-15 15:19:40.465548] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:02.454 [2024-07-15 15:19:40.465556] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:02.454 [2024-07-15 15:19:40.465564] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:02.454 [2024-07-15 15:19:40.465571] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:02.454 [2024-07-15 15:19:40.465579] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:02.454 [2024-07-15 15:19:40.465587] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:02.454 [2024-07-15 15:19:40.465595] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:02.454 [2024-07-15 15:19:40.465602] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:02.454 [2024-07-15 15:19:40.465611] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:02.454 [2024-07-15 15:19:40.465618] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:02.454 [2024-07-15 15:19:40.465626] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:02.454 [2024-07-15 15:19:40.465633] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:02.454 [2024-07-15 15:19:40.465641] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:02.454 [2024-07-15 15:19:40.465648] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:02.454 [2024-07-15 15:19:40.465659] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:02.454 [2024-07-15 15:19:40.465669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:02.454 [2024-07-15 15:19:40.465679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:02.454 [2024-07-15 15:19:40.465687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:02.454 [2024-07-15 15:19:40.465695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:02.454 [2024-07-15 15:19:40.465703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:02.454 [2024-07-15 15:19:40.465712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:02.454 [2024-07-15 15:19:40.465720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:02.454 [2024-07-15 15:19:40.465727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:02.454 [2024-07-15 15:19:40.465736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:24:02.454 [2024-07-15 15:19:40.465744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:02.454 [2024-07-15 15:19:40.465752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:02.454 [2024-07-15 15:19:40.465760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:02.454 [2024-07-15 15:19:40.465768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:02.454 [2024-07-15 15:19:40.465776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:02.454 [2024-07-15 15:19:40.465784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:02.454 [2024-07-15 15:19:40.465791] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:02.454 [2024-07-15 15:19:40.465800] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:02.454 [2024-07-15 15:19:40.465808] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:02.454 [2024-07-15 15:19:40.465816] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:02.454 [2024-07-15 15:19:40.465824] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:02.454 [2024-07-15 15:19:40.465832] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:02.454 [2024-07-15 15:19:40.465841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.454 [2024-07-15 15:19:40.465852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:02.454 [2024-07-15 15:19:40.465861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.687 ms 00:24:02.454 [2024-07-15 15:19:40.465869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.454 [2024-07-15 15:19:40.529339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.454 [2024-07-15 15:19:40.529390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:02.454 [2024-07-15 15:19:40.529402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.535 ms 00:24:02.454 [2024-07-15 15:19:40.529410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.454 [2024-07-15 15:19:40.529511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.454 [2024-07-15 15:19:40.529520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:02.454 [2024-07-15 15:19:40.529528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:24:02.454 [2024-07-15 15:19:40.529535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.713 [2024-07-15 15:19:40.584491] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:24:02.713 [2024-07-15 15:19:40.584535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:02.713 [2024-07-15 15:19:40.584549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.994 ms 00:24:02.713 [2024-07-15 15:19:40.584557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.713 [2024-07-15 15:19:40.584614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.713 [2024-07-15 15:19:40.584624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:02.713 [2024-07-15 15:19:40.584634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:02.713 [2024-07-15 15:19:40.584642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.713 [2024-07-15 15:19:40.585131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.713 [2024-07-15 15:19:40.585145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:02.713 [2024-07-15 15:19:40.585154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.431 ms 00:24:02.713 [2024-07-15 15:19:40.585161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.713 [2024-07-15 15:19:40.585285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.713 [2024-07-15 15:19:40.585308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:02.713 [2024-07-15 15:19:40.585321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:24:02.713 [2024-07-15 15:19:40.585329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.713 [2024-07-15 15:19:40.606611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.713 [2024-07-15 15:19:40.606652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:02.713 [2024-07-15 15:19:40.606665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.292 ms 00:24:02.713 [2024-07-15 15:19:40.606672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.713 [2024-07-15 15:19:40.628793] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:02.713 [2024-07-15 15:19:40.628834] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:02.713 [2024-07-15 15:19:40.628846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.713 [2024-07-15 15:19:40.628855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:02.713 [2024-07-15 15:19:40.628865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.088 ms 00:24:02.713 [2024-07-15 15:19:40.628872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.713 [2024-07-15 15:19:40.660198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.713 [2024-07-15 15:19:40.660243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:02.713 [2024-07-15 15:19:40.660255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.338 ms 00:24:02.713 [2024-07-15 15:19:40.660269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.713 [2024-07-15 15:19:40.681319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.713 [2024-07-15 15:19:40.681358] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:02.713 [2024-07-15 15:19:40.681370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.044 ms 00:24:02.713 [2024-07-15 15:19:40.681378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.713 [2024-07-15 15:19:40.702050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.713 [2024-07-15 15:19:40.702088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:02.713 [2024-07-15 15:19:40.702100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.671 ms 00:24:02.713 [2024-07-15 15:19:40.702107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.713 [2024-07-15 15:19:40.703043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.713 [2024-07-15 15:19:40.703080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:02.713 [2024-07-15 15:19:40.703091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.826 ms 00:24:02.713 [2024-07-15 15:19:40.703098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.713 [2024-07-15 15:19:40.799431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.713 [2024-07-15 15:19:40.799496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:02.713 [2024-07-15 15:19:40.799511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.493 ms 00:24:02.713 [2024-07-15 15:19:40.799536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.713 [2024-07-15 15:19:40.814107] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:02.713 [2024-07-15 15:19:40.817543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.713 [2024-07-15 15:19:40.817580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:02.713 [2024-07-15 15:19:40.817593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.958 ms 00:24:02.713 [2024-07-15 15:19:40.817601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.713 [2024-07-15 15:19:40.817697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.713 [2024-07-15 15:19:40.817708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:02.713 [2024-07-15 15:19:40.817717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:02.713 [2024-07-15 15:19:40.817724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.713 [2024-07-15 15:19:40.817789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.713 [2024-07-15 15:19:40.817802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:02.713 [2024-07-15 15:19:40.817810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:24:02.713 [2024-07-15 15:19:40.817817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.713 [2024-07-15 15:19:40.817837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.713 [2024-07-15 15:19:40.817845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:02.713 [2024-07-15 15:19:40.817853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:02.713 [2024-07-15 15:19:40.817860] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.713 [2024-07-15 15:19:40.817889] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:02.713 [2024-07-15 15:19:40.817899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.713 [2024-07-15 15:19:40.817906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:02.713 [2024-07-15 15:19:40.817915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:02.713 [2024-07-15 15:19:40.817923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.972 [2024-07-15 15:19:40.862178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.973 [2024-07-15 15:19:40.862247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:02.973 [2024-07-15 15:19:40.862260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.319 ms 00:24:02.973 [2024-07-15 15:19:40.862269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.973 [2024-07-15 15:19:40.862361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.973 [2024-07-15 15:19:40.862378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:02.973 [2024-07-15 15:19:40.862386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:24:02.973 [2024-07-15 15:19:40.862393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.973 [2024-07-15 15:19:40.863692] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 438.921 ms, result 0 00:24:35.971  Copying: 33/1024 [MB] (33 MBps) Copying: 66/1024 [MB] (33 MBps) Copying: 100/1024 [MB] (33 MBps) Copying: 134/1024 [MB] (33 MBps) Copying: 164/1024 [MB] (29 MBps) Copying: 196/1024 [MB] (32 MBps) Copying: 228/1024 [MB] (32 MBps) Copying: 261/1024 [MB] (32 MBps) Copying: 295/1024 [MB] (33 MBps) Copying: 327/1024 [MB] (32 MBps) Copying: 360/1024 [MB] (33 MBps) Copying: 392/1024 [MB] (32 MBps) Copying: 424/1024 [MB] (32 MBps) Copying: 456/1024 [MB] (31 MBps) Copying: 489/1024 [MB] (33 MBps) Copying: 521/1024 [MB] (31 MBps) Copying: 553/1024 [MB] (31 MBps) Copying: 584/1024 [MB] (31 MBps) Copying: 616/1024 [MB] (31 MBps) Copying: 647/1024 [MB] (30 MBps) Copying: 678/1024 [MB] (31 MBps) Copying: 709/1024 [MB] (31 MBps) Copying: 740/1024 [MB] (31 MBps) Copying: 771/1024 [MB] (30 MBps) Copying: 801/1024 [MB] (30 MBps) Copying: 832/1024 [MB] (30 MBps) Copying: 863/1024 [MB] (31 MBps) Copying: 894/1024 [MB] (31 MBps) Copying: 925/1024 [MB] (31 MBps) Copying: 957/1024 [MB] (31 MBps) Copying: 988/1024 [MB] (31 MBps) Copying: 1019/1024 [MB] (31 MBps) Copying: 1048492/1048576 [kB] (4796 kBps) Copying: 1024/1024 [MB] (average 30 MBps)[2024-07-15 15:20:13.914590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.971 [2024-07-15 15:20:13.914726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:35.971 [2024-07-15 15:20:13.914764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:35.971 [2024-07-15 15:20:13.914777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.971 [2024-07-15 15:20:13.916618] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:35.971 [2024-07-15 15:20:13.921516] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:24:35.971 [2024-07-15 15:20:13.921551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:35.971 [2024-07-15 15:20:13.921562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.840 ms 00:24:35.971 [2024-07-15 15:20:13.921570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.971 [2024-07-15 15:20:13.933528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.971 [2024-07-15 15:20:13.933564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:35.971 [2024-07-15 15:20:13.933576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.998 ms 00:24:35.971 [2024-07-15 15:20:13.933583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.971 [2024-07-15 15:20:13.958311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.971 [2024-07-15 15:20:13.958398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:35.971 [2024-07-15 15:20:13.958427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.757 ms 00:24:35.971 [2024-07-15 15:20:13.958437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.971 [2024-07-15 15:20:13.963723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.971 [2024-07-15 15:20:13.963753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:35.971 [2024-07-15 15:20:13.963763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.258 ms 00:24:35.971 [2024-07-15 15:20:13.963772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.971 [2024-07-15 15:20:14.003939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.971 [2024-07-15 15:20:14.004026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:35.971 [2024-07-15 15:20:14.004042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.177 ms 00:24:35.971 [2024-07-15 15:20:14.004049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.971 [2024-07-15 15:20:14.027421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.971 [2024-07-15 15:20:14.027470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:35.971 [2024-07-15 15:20:14.027485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.332 ms 00:24:35.971 [2024-07-15 15:20:14.027501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.231 [2024-07-15 15:20:14.125695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.231 [2024-07-15 15:20:14.125793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:36.231 [2024-07-15 15:20:14.125812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.327 ms 00:24:36.231 [2024-07-15 15:20:14.125820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.231 [2024-07-15 15:20:14.167325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.231 [2024-07-15 15:20:14.167384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:36.231 [2024-07-15 15:20:14.167398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.559 ms 00:24:36.231 [2024-07-15 15:20:14.167404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.231 
[2024-07-15 15:20:14.208082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.231 [2024-07-15 15:20:14.208143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:36.231 [2024-07-15 15:20:14.208155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.705 ms 00:24:36.231 [2024-07-15 15:20:14.208163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.231 [2024-07-15 15:20:14.249846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.231 [2024-07-15 15:20:14.249909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:36.231 [2024-07-15 15:20:14.249970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.711 ms 00:24:36.231 [2024-07-15 15:20:14.249978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.231 [2024-07-15 15:20:14.293198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.231 [2024-07-15 15:20:14.293254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:36.231 [2024-07-15 15:20:14.293267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.187 ms 00:24:36.231 [2024-07-15 15:20:14.293275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.231 [2024-07-15 15:20:14.293316] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:36.231 [2024-07-15 15:20:14.293331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 119552 / 261120 wr_cnt: 1 state: open 00:24:36.231 [2024-07-15 15:20:14.293342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293645] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:36.231 [2024-07-15 15:20:14.293827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:36.232 [2024-07-15 
15:20:14.293834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:36.232 [2024-07-15 15:20:14.293842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:36.232 [2024-07-15 15:20:14.293849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:36.232 [2024-07-15 15:20:14.293857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:36.232 [2024-07-15 15:20:14.293865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:36.232 [2024-07-15 15:20:14.293872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:36.232 [2024-07-15 15:20:14.293880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:36.232 [2024-07-15 15:20:14.293887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:36.232 [2024-07-15 15:20:14.293895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:36.232 [2024-07-15 15:20:14.293902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:36.232 [2024-07-15 15:20:14.293910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:36.232 [2024-07-15 15:20:14.293917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:36.232 [2024-07-15 15:20:14.293925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:36.232 [2024-07-15 15:20:14.293932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:36.232 [2024-07-15 15:20:14.293940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:36.232 [2024-07-15 15:20:14.293947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:36.232 [2024-07-15 15:20:14.293955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:36.232 [2024-07-15 15:20:14.293962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:36.232 [2024-07-15 15:20:14.293970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:36.232 [2024-07-15 15:20:14.293989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:36.232 [2024-07-15 15:20:14.293997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:36.232 [2024-07-15 15:20:14.294018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:36.232 [2024-07-15 15:20:14.294030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:36.232 [2024-07-15 15:20:14.294038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:36.232 [2024-07-15 15:20:14.294046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 
00:24:36.232 [2024-07-15 15:20:14.294054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:36.232 [2024-07-15 15:20:14.294061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:36.232 [2024-07-15 15:20:14.294068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:36.232 [2024-07-15 15:20:14.294076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:36.232 [2024-07-15 15:20:14.294083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:36.232 [2024-07-15 15:20:14.294091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:36.232 [2024-07-15 15:20:14.294098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:36.232 [2024-07-15 15:20:14.294105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:36.232 [2024-07-15 15:20:14.294112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:36.232 [2024-07-15 15:20:14.294119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:36.232 [2024-07-15 15:20:14.294126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:36.232 [2024-07-15 15:20:14.294141] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:36.232 [2024-07-15 15:20:14.294148] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9bb54792-33fc-4be3-bb67-de335ccaec96 00:24:36.232 [2024-07-15 15:20:14.294156] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 119552 00:24:36.232 [2024-07-15 15:20:14.294163] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 120512 00:24:36.232 [2024-07-15 15:20:14.294170] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 119552 00:24:36.232 [2024-07-15 15:20:14.294177] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0080 00:24:36.232 [2024-07-15 15:20:14.294184] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:36.232 [2024-07-15 15:20:14.294195] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:36.232 [2024-07-15 15:20:14.294202] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:36.232 [2024-07-15 15:20:14.294208] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:36.232 [2024-07-15 15:20:14.294214] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:36.232 [2024-07-15 15:20:14.294221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.232 [2024-07-15 15:20:14.294236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:36.232 [2024-07-15 15:20:14.294244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.908 ms 00:24:36.232 [2024-07-15 15:20:14.294250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.232 [2024-07-15 15:20:14.315747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.232 [2024-07-15 15:20:14.315803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:36.232 [2024-07-15 
15:20:14.315843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.500 ms 00:24:36.232 [2024-07-15 15:20:14.315851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.232 [2024-07-15 15:20:14.316407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.232 [2024-07-15 15:20:14.316421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:36.232 [2024-07-15 15:20:14.316429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:24:36.232 [2024-07-15 15:20:14.316436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.491 [2024-07-15 15:20:14.363556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.491 [2024-07-15 15:20:14.363615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:36.491 [2024-07-15 15:20:14.363628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.491 [2024-07-15 15:20:14.363652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.491 [2024-07-15 15:20:14.363730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.491 [2024-07-15 15:20:14.363750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:36.491 [2024-07-15 15:20:14.363757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.491 [2024-07-15 15:20:14.363764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.491 [2024-07-15 15:20:14.363832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.491 [2024-07-15 15:20:14.363843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:36.491 [2024-07-15 15:20:14.363851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.491 [2024-07-15 15:20:14.363857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.491 [2024-07-15 15:20:14.363875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.491 [2024-07-15 15:20:14.363883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:36.491 [2024-07-15 15:20:14.363891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.491 [2024-07-15 15:20:14.363898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.491 [2024-07-15 15:20:14.486373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.491 [2024-07-15 15:20:14.486435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:36.491 [2024-07-15 15:20:14.486448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.491 [2024-07-15 15:20:14.486471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.491 [2024-07-15 15:20:14.597647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.491 [2024-07-15 15:20:14.597702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:36.491 [2024-07-15 15:20:14.597714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.491 [2024-07-15 15:20:14.597722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.491 [2024-07-15 15:20:14.597784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.491 [2024-07-15 15:20:14.597793] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:36.491 [2024-07-15 15:20:14.597800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.491 [2024-07-15 15:20:14.597807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.491 [2024-07-15 15:20:14.597855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.491 [2024-07-15 15:20:14.597871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:36.492 [2024-07-15 15:20:14.597879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.492 [2024-07-15 15:20:14.597886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.492 [2024-07-15 15:20:14.597982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.492 [2024-07-15 15:20:14.597993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:36.492 [2024-07-15 15:20:14.598000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.492 [2024-07-15 15:20:14.598023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.492 [2024-07-15 15:20:14.598057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.492 [2024-07-15 15:20:14.598067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:36.492 [2024-07-15 15:20:14.598079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.492 [2024-07-15 15:20:14.598086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.492 [2024-07-15 15:20:14.598122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.492 [2024-07-15 15:20:14.598131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:36.492 [2024-07-15 15:20:14.598138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.492 [2024-07-15 15:20:14.598145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.492 [2024-07-15 15:20:14.598187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.492 [2024-07-15 15:20:14.598198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:36.492 [2024-07-15 15:20:14.598206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.492 [2024-07-15 15:20:14.598215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.492 [2024-07-15 15:20:14.598329] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 688.533 ms, result 0 00:24:39.062 00:24:39.062 00:24:39.062 15:20:17 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:24:39.320 [2024-07-15 15:20:17.184454] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
00:24:39.320 [2024-07-15 15:20:17.184566] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83704 ] 00:24:39.320 [2024-07-15 15:20:17.346667] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.578 [2024-07-15 15:20:17.591731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.145 [2024-07-15 15:20:17.998706] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:40.145 [2024-07-15 15:20:17.998768] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:40.145 [2024-07-15 15:20:18.155291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.145 [2024-07-15 15:20:18.155357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:40.145 [2024-07-15 15:20:18.155372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:40.145 [2024-07-15 15:20:18.155382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.145 [2024-07-15 15:20:18.155444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.145 [2024-07-15 15:20:18.155457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:40.145 [2024-07-15 15:20:18.155466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:24:40.145 [2024-07-15 15:20:18.155477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.145 [2024-07-15 15:20:18.155500] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:40.145 [2024-07-15 15:20:18.156739] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:40.145 [2024-07-15 15:20:18.156768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.145 [2024-07-15 15:20:18.156780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:40.145 [2024-07-15 15:20:18.156788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.276 ms 00:24:40.145 [2024-07-15 15:20:18.156795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.145 [2024-07-15 15:20:18.158236] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:40.145 [2024-07-15 15:20:18.178489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.145 [2024-07-15 15:20:18.178533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:40.145 [2024-07-15 15:20:18.178564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.292 ms 00:24:40.145 [2024-07-15 15:20:18.178572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.145 [2024-07-15 15:20:18.178718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.145 [2024-07-15 15:20:18.178729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:40.145 [2024-07-15 15:20:18.178742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:24:40.145 [2024-07-15 15:20:18.178749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.145 [2024-07-15 15:20:18.185747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:40.145 [2024-07-15 15:20:18.185780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:40.145 [2024-07-15 15:20:18.185791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.944 ms 00:24:40.145 [2024-07-15 15:20:18.185799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.145 [2024-07-15 15:20:18.185880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.145 [2024-07-15 15:20:18.185908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:40.145 [2024-07-15 15:20:18.185915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:24:40.145 [2024-07-15 15:20:18.185922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.145 [2024-07-15 15:20:18.185967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.145 [2024-07-15 15:20:18.185976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:40.145 [2024-07-15 15:20:18.185984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:40.145 [2024-07-15 15:20:18.185991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.145 [2024-07-15 15:20:18.186029] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:40.145 [2024-07-15 15:20:18.191661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.145 [2024-07-15 15:20:18.191690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:40.145 [2024-07-15 15:20:18.191700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.649 ms 00:24:40.145 [2024-07-15 15:20:18.191707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.145 [2024-07-15 15:20:18.191741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.145 [2024-07-15 15:20:18.191750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:40.145 [2024-07-15 15:20:18.191758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:40.145 [2024-07-15 15:20:18.191764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.145 [2024-07-15 15:20:18.191809] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:40.145 [2024-07-15 15:20:18.191830] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:40.145 [2024-07-15 15:20:18.191870] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:40.145 [2024-07-15 15:20:18.191887] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:24:40.145 [2024-07-15 15:20:18.191971] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:40.145 [2024-07-15 15:20:18.191981] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:40.145 [2024-07-15 15:20:18.192005] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:24:40.145 [2024-07-15 15:20:18.192016] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:40.145 [2024-07-15 15:20:18.192024] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:40.145 [2024-07-15 15:20:18.192033] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:40.145 [2024-07-15 15:20:18.192040] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:40.145 [2024-07-15 15:20:18.192047] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:40.145 [2024-07-15 15:20:18.192054] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:40.145 [2024-07-15 15:20:18.192062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.145 [2024-07-15 15:20:18.192072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:40.145 [2024-07-15 15:20:18.192080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:24:40.145 [2024-07-15 15:20:18.192086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.145 [2024-07-15 15:20:18.192155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.145 [2024-07-15 15:20:18.192163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:40.145 [2024-07-15 15:20:18.192170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:24:40.145 [2024-07-15 15:20:18.192177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.145 [2024-07-15 15:20:18.192257] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:40.145 [2024-07-15 15:20:18.192266] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:40.145 [2024-07-15 15:20:18.192278] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:40.145 [2024-07-15 15:20:18.192285] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:40.145 [2024-07-15 15:20:18.192293] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:40.145 [2024-07-15 15:20:18.192300] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:40.145 [2024-07-15 15:20:18.192307] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:40.145 [2024-07-15 15:20:18.192314] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:40.145 [2024-07-15 15:20:18.192321] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:40.145 [2024-07-15 15:20:18.192327] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:40.145 [2024-07-15 15:20:18.192334] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:40.145 [2024-07-15 15:20:18.192341] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:40.145 [2024-07-15 15:20:18.192348] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:40.145 [2024-07-15 15:20:18.192354] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:40.145 [2024-07-15 15:20:18.192362] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:40.145 [2024-07-15 15:20:18.192368] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:40.145 [2024-07-15 15:20:18.192375] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:40.145 [2024-07-15 15:20:18.192382] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:40.145 [2024-07-15 15:20:18.192388] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:40.145 [2024-07-15 15:20:18.192395] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:40.145 [2024-07-15 15:20:18.192413] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:40.145 [2024-07-15 15:20:18.192420] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:40.145 [2024-07-15 15:20:18.192427] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:40.145 [2024-07-15 15:20:18.192433] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:40.145 [2024-07-15 15:20:18.192440] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:40.145 [2024-07-15 15:20:18.192446] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:40.145 [2024-07-15 15:20:18.192453] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:40.145 [2024-07-15 15:20:18.192460] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:40.145 [2024-07-15 15:20:18.192466] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:40.145 [2024-07-15 15:20:18.192473] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:40.145 [2024-07-15 15:20:18.192480] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:40.145 [2024-07-15 15:20:18.192486] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:40.145 [2024-07-15 15:20:18.192493] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:40.145 [2024-07-15 15:20:18.192499] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:40.145 [2024-07-15 15:20:18.192505] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:40.145 [2024-07-15 15:20:18.192512] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:40.145 [2024-07-15 15:20:18.192519] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:40.145 [2024-07-15 15:20:18.192556] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:40.145 [2024-07-15 15:20:18.192563] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:40.145 [2024-07-15 15:20:18.192570] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:40.145 [2024-07-15 15:20:18.192576] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:40.145 [2024-07-15 15:20:18.192583] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:40.145 [2024-07-15 15:20:18.192590] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:40.145 [2024-07-15 15:20:18.192596] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:40.146 [2024-07-15 15:20:18.192603] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:40.146 [2024-07-15 15:20:18.192610] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:40.146 [2024-07-15 15:20:18.192617] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:40.146 [2024-07-15 15:20:18.192624] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:40.146 [2024-07-15 15:20:18.192631] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:40.146 [2024-07-15 15:20:18.192637] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:40.146 
[2024-07-15 15:20:18.192644] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:40.146 [2024-07-15 15:20:18.192650] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:40.146 [2024-07-15 15:20:18.192657] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:40.146 [2024-07-15 15:20:18.192666] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:40.146 [2024-07-15 15:20:18.192674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:40.146 [2024-07-15 15:20:18.192682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:40.146 [2024-07-15 15:20:18.192690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:40.146 [2024-07-15 15:20:18.192696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:40.146 [2024-07-15 15:20:18.192704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:40.146 [2024-07-15 15:20:18.192711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:40.146 [2024-07-15 15:20:18.192718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:40.146 [2024-07-15 15:20:18.192725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:40.146 [2024-07-15 15:20:18.192732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:40.146 [2024-07-15 15:20:18.192739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:40.146 [2024-07-15 15:20:18.192747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:40.146 [2024-07-15 15:20:18.192753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:40.146 [2024-07-15 15:20:18.192760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:40.146 [2024-07-15 15:20:18.192767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:40.146 [2024-07-15 15:20:18.192774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:40.146 [2024-07-15 15:20:18.192781] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:40.146 [2024-07-15 15:20:18.192788] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:40.146 [2024-07-15 15:20:18.192796] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:40.146 [2024-07-15 15:20:18.192803] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:40.146 [2024-07-15 15:20:18.192810] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:40.146 [2024-07-15 15:20:18.192817] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:40.146 [2024-07-15 15:20:18.192825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.146 [2024-07-15 15:20:18.192835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:40.146 [2024-07-15 15:20:18.192842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.621 ms 00:24:40.146 [2024-07-15 15:20:18.192849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.146 [2024-07-15 15:20:18.252913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.146 [2024-07-15 15:20:18.252962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:40.146 [2024-07-15 15:20:18.252974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.131 ms 00:24:40.146 [2024-07-15 15:20:18.252982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.146 [2024-07-15 15:20:18.253087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.146 [2024-07-15 15:20:18.253096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:40.146 [2024-07-15 15:20:18.253105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:24:40.146 [2024-07-15 15:20:18.253112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.405 [2024-07-15 15:20:18.305768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.405 [2024-07-15 15:20:18.305811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:40.405 [2024-07-15 15:20:18.305822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.691 ms 00:24:40.405 [2024-07-15 15:20:18.305845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.405 [2024-07-15 15:20:18.305900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.405 [2024-07-15 15:20:18.305909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:40.405 [2024-07-15 15:20:18.305917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:40.405 [2024-07-15 15:20:18.305924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.405 [2024-07-15 15:20:18.306423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.405 [2024-07-15 15:20:18.306439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:40.405 [2024-07-15 15:20:18.306447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.444 ms 00:24:40.405 [2024-07-15 15:20:18.306454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.405 [2024-07-15 15:20:18.306586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.405 [2024-07-15 15:20:18.306599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:40.405 [2024-07-15 15:20:18.306608] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:24:40.405 [2024-07-15 15:20:18.306616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.405 [2024-07-15 15:20:18.328108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.405 [2024-07-15 15:20:18.328149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:40.405 [2024-07-15 15:20:18.328162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.508 ms 00:24:40.405 [2024-07-15 15:20:18.328169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.405 [2024-07-15 15:20:18.350586] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:40.405 [2024-07-15 15:20:18.350633] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:40.405 [2024-07-15 15:20:18.350647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.405 [2024-07-15 15:20:18.350656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:40.405 [2024-07-15 15:20:18.350667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.397 ms 00:24:40.405 [2024-07-15 15:20:18.350675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.405 [2024-07-15 15:20:18.384557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.405 [2024-07-15 15:20:18.384601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:40.405 [2024-07-15 15:20:18.384614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.889 ms 00:24:40.405 [2024-07-15 15:20:18.384625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.405 [2024-07-15 15:20:18.405113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.405 [2024-07-15 15:20:18.405151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:40.405 [2024-07-15 15:20:18.405163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.472 ms 00:24:40.405 [2024-07-15 15:20:18.405171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.405 [2024-07-15 15:20:18.424687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.405 [2024-07-15 15:20:18.424718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:40.405 [2024-07-15 15:20:18.424727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.514 ms 00:24:40.405 [2024-07-15 15:20:18.424734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.405 [2024-07-15 15:20:18.425651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.405 [2024-07-15 15:20:18.425682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:40.405 [2024-07-15 15:20:18.425692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.817 ms 00:24:40.405 [2024-07-15 15:20:18.425700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.663 [2024-07-15 15:20:18.519969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.663 [2024-07-15 15:20:18.520053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:40.663 [2024-07-15 15:20:18.520067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 94.428 ms 00:24:40.663 [2024-07-15 15:20:18.520075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.663 [2024-07-15 15:20:18.534637] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:40.663 [2024-07-15 15:20:18.537969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.663 [2024-07-15 15:20:18.538016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:40.663 [2024-07-15 15:20:18.538029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.848 ms 00:24:40.663 [2024-07-15 15:20:18.538036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.663 [2024-07-15 15:20:18.538134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.663 [2024-07-15 15:20:18.538145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:40.663 [2024-07-15 15:20:18.538153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:40.663 [2024-07-15 15:20:18.538160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.663 [2024-07-15 15:20:18.539791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.663 [2024-07-15 15:20:18.539828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:40.663 [2024-07-15 15:20:18.539838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.601 ms 00:24:40.663 [2024-07-15 15:20:18.539846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.663 [2024-07-15 15:20:18.539876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.663 [2024-07-15 15:20:18.539884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:40.663 [2024-07-15 15:20:18.539891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:40.663 [2024-07-15 15:20:18.539899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.663 [2024-07-15 15:20:18.539938] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:40.663 [2024-07-15 15:20:18.539947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.663 [2024-07-15 15:20:18.539954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:40.663 [2024-07-15 15:20:18.539964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:40.663 [2024-07-15 15:20:18.539972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.663 [2024-07-15 15:20:18.580495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.663 [2024-07-15 15:20:18.580554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:40.663 [2024-07-15 15:20:18.580569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.581 ms 00:24:40.663 [2024-07-15 15:20:18.580577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.663 [2024-07-15 15:20:18.580671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.663 [2024-07-15 15:20:18.580690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:40.663 [2024-07-15 15:20:18.580699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:24:40.663 [2024-07-15 15:20:18.580707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
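The trace_step entries above follow a fixed pattern: each FTL management step logs its name (the 428:trace_step line), its duration (430:trace_step) and its status (431:trace_step). A minimal sketch for pulling the slowest steps out of a captured console log, assuming the log is saved one entry per line as Jenkins emits it and using a hypothetical file name build.log (awk, sort and head only, nothing SPDK-specific):

    # pair each step name with the duration reported on the following 430 line,
    # then list the slowest steps first
    awk '/428:trace_step/ { sub(/.*name: /, "");     name = $0 }
         /430:trace_step/ { sub(/.*duration: /, ""); print $1 " ms\t" name }' build.log \
      | sort -rn | head

This is an illustrative helper, not part of the test suite; it only assumes the message format visible in the trace above.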
00:24:40.663 [2024-07-15 15:20:18.587848] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 430.426 ms, result 0 00:25:09.561  Copying: 32/1024 [MB] (32 MBps) Copying: 68/1024 [MB] (35 MBps) Copying: 104/1024 [MB] (35 MBps) Copying: 140/1024 [MB] (36 MBps) Copying: 175/1024 [MB] (35 MBps) Copying: 211/1024 [MB] (35 MBps) Copying: 248/1024 [MB] (36 MBps) Copying: 283/1024 [MB] (35 MBps) Copying: 319/1024 [MB] (35 MBps) Copying: 355/1024 [MB] (35 MBps) Copying: 390/1024 [MB] (35 MBps) Copying: 426/1024 [MB] (36 MBps) Copying: 462/1024 [MB] (35 MBps) Copying: 499/1024 [MB] (36 MBps) Copying: 535/1024 [MB] (36 MBps) Copying: 571/1024 [MB] (35 MBps) Copying: 607/1024 [MB] (36 MBps) Copying: 642/1024 [MB] (34 MBps) Copying: 679/1024 [MB] (36 MBps) Copying: 714/1024 [MB] (35 MBps) Copying: 749/1024 [MB] (35 MBps) Copying: 783/1024 [MB] (33 MBps) Copying: 819/1024 [MB] (35 MBps) Copying: 855/1024 [MB] (36 MBps) Copying: 892/1024 [MB] (36 MBps) Copying: 929/1024 [MB] (36 MBps) Copying: 965/1024 [MB] (36 MBps) Copying: 1001/1024 [MB] (36 MBps) Copying: 1024/1024 [MB] (average 35 MBps)[2024-07-15 15:20:47.605103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.561 [2024-07-15 15:20:47.605185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:09.561 [2024-07-15 15:20:47.605205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:09.561 [2024-07-15 15:20:47.605218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.561 [2024-07-15 15:20:47.605250] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:09.561 [2024-07-15 15:20:47.611114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.561 [2024-07-15 15:20:47.611149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:09.561 [2024-07-15 15:20:47.611161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.855 ms 00:25:09.561 [2024-07-15 15:20:47.611169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.561 [2024-07-15 15:20:47.611396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.561 [2024-07-15 15:20:47.611406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:09.561 [2024-07-15 15:20:47.611415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:25:09.561 [2024-07-15 15:20:47.611423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.561 [2024-07-15 15:20:47.615774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.561 [2024-07-15 15:20:47.615817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:09.561 [2024-07-15 15:20:47.615836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.342 ms 00:25:09.561 [2024-07-15 15:20:47.615845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.561 [2024-07-15 15:20:47.623088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.561 [2024-07-15 15:20:47.623124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:09.561 [2024-07-15 15:20:47.623135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.216 ms 00:25:09.561 [2024-07-15 15:20:47.623143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.561 
[2024-07-15 15:20:47.658037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.561 [2024-07-15 15:20:47.658118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:09.561 [2024-07-15 15:20:47.658139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.903 ms 00:25:09.561 [2024-07-15 15:20:47.658151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.819 [2024-07-15 15:20:47.681173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.819 [2024-07-15 15:20:47.681230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:09.819 [2024-07-15 15:20:47.681243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.985 ms 00:25:09.819 [2024-07-15 15:20:47.681261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.819 [2024-07-15 15:20:47.784603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.819 [2024-07-15 15:20:47.784663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:09.819 [2024-07-15 15:20:47.784679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 103.483 ms 00:25:09.819 [2024-07-15 15:20:47.784688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.819 [2024-07-15 15:20:47.826092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.819 [2024-07-15 15:20:47.826145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:09.819 [2024-07-15 15:20:47.826159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.465 ms 00:25:09.819 [2024-07-15 15:20:47.826166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.819 [2024-07-15 15:20:47.869334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.819 [2024-07-15 15:20:47.869397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:09.819 [2024-07-15 15:20:47.869410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.203 ms 00:25:09.819 [2024-07-15 15:20:47.869418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.819 [2024-07-15 15:20:47.912633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.819 [2024-07-15 15:20:47.912686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:09.819 [2024-07-15 15:20:47.912700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.221 ms 00:25:09.819 [2024-07-15 15:20:47.912725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.079 [2024-07-15 15:20:47.955084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.079 [2024-07-15 15:20:47.955156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:10.079 [2024-07-15 15:20:47.955171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.327 ms 00:25:10.079 [2024-07-15 15:20:47.955180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.079 [2024-07-15 15:20:47.955233] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:10.079 [2024-07-15 15:20:47.955250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133888 / 261120 wr_cnt: 1 state: open 00:25:10.079 [2024-07-15 15:20:47.955261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 
wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
27: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955683] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:10.079 [2024-07-15 15:20:47.955692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.955701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.955719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.955727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.955735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.955743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.955750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.955757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.955764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.955773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.955780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.955787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.955794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.955802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.955809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.955817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.955824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.955832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.955839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.955847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.955855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.955863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.955871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.955878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.955886] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.955893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.955901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.955908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.955915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.955923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.955930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.955937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.955945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.955952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.955960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.955967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.955975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.955982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.955990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.955997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.956018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.956026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.956034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.956043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.956050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.956058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.956065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.956073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:10.080 [2024-07-15 15:20:47.956088] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:10.080 [2024-07-15 15:20:47.956096] ftl_debug.c: 
212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9bb54792-33fc-4be3-bb67-de335ccaec96 00:25:10.080 [2024-07-15 15:20:47.956104] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133888 00:25:10.080 [2024-07-15 15:20:47.956111] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 15296 00:25:10.080 [2024-07-15 15:20:47.956118] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 14336 00:25:10.080 [2024-07-15 15:20:47.956126] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0670 00:25:10.080 [2024-07-15 15:20:47.956133] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:10.080 [2024-07-15 15:20:47.956146] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:10.080 [2024-07-15 15:20:47.956154] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:10.080 [2024-07-15 15:20:47.956160] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:10.080 [2024-07-15 15:20:47.956166] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:10.080 [2024-07-15 15:20:47.956185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.080 [2024-07-15 15:20:47.956192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:10.080 [2024-07-15 15:20:47.956204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.955 ms 00:25:10.080 [2024-07-15 15:20:47.956211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.080 [2024-07-15 15:20:47.978112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.080 [2024-07-15 15:20:47.978168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:10.080 [2024-07-15 15:20:47.978179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.906 ms 00:25:10.080 [2024-07-15 15:20:47.978197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.080 [2024-07-15 15:20:47.978770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.080 [2024-07-15 15:20:47.978784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:10.080 [2024-07-15 15:20:47.978793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms 00:25:10.080 [2024-07-15 15:20:47.978802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.080 [2024-07-15 15:20:48.025687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:10.080 [2024-07-15 15:20:48.025744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:10.080 [2024-07-15 15:20:48.025757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:10.080 [2024-07-15 15:20:48.025764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.080 [2024-07-15 15:20:48.025834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:10.080 [2024-07-15 15:20:48.025841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:10.080 [2024-07-15 15:20:48.025849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:10.080 [2024-07-15 15:20:48.025856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.080 [2024-07-15 15:20:48.025921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:10.080 [2024-07-15 15:20:48.025933] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:10.080 [2024-07-15 15:20:48.025940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:10.080 [2024-07-15 15:20:48.025947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.080 [2024-07-15 15:20:48.025965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:10.080 [2024-07-15 15:20:48.025972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:10.080 [2024-07-15 15:20:48.025979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:10.080 [2024-07-15 15:20:48.025986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.080 [2024-07-15 15:20:48.150189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:10.080 [2024-07-15 15:20:48.150252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:10.080 [2024-07-15 15:20:48.150264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:10.080 [2024-07-15 15:20:48.150291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.348 [2024-07-15 15:20:48.258412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:10.348 [2024-07-15 15:20:48.258466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:10.348 [2024-07-15 15:20:48.258479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:10.348 [2024-07-15 15:20:48.258488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.348 [2024-07-15 15:20:48.258562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:10.348 [2024-07-15 15:20:48.258573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:10.348 [2024-07-15 15:20:48.258582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:10.348 [2024-07-15 15:20:48.258590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.348 [2024-07-15 15:20:48.258620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:10.348 [2024-07-15 15:20:48.258628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:10.348 [2024-07-15 15:20:48.258641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:10.348 [2024-07-15 15:20:48.258648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.348 [2024-07-15 15:20:48.258750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:10.348 [2024-07-15 15:20:48.258761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:10.348 [2024-07-15 15:20:48.258769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:10.348 [2024-07-15 15:20:48.258777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.348 [2024-07-15 15:20:48.258808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:10.348 [2024-07-15 15:20:48.258818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:10.348 [2024-07-15 15:20:48.258829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:10.348 [2024-07-15 15:20:48.258836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.348 [2024-07-15 15:20:48.258874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:25:10.348 [2024-07-15 15:20:48.258883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:10.348 [2024-07-15 15:20:48.258890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:10.348 [2024-07-15 15:20:48.258897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.348 [2024-07-15 15:20:48.258937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:10.348 [2024-07-15 15:20:48.258945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:10.348 [2024-07-15 15:20:48.258956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:10.348 [2024-07-15 15:20:48.258963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.348 [2024-07-15 15:20:48.259120] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 655.258 ms, result 0 00:25:11.724 00:25:11.724 00:25:11.724 15:20:49 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:13.656 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:13.656 15:20:51 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:25:13.656 15:20:51 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:25:13.656 15:20:51 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:13.656 15:20:51 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:13.656 15:20:51 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:13.656 Process with pid 82353 is not found 00:25:13.656 Remove shared memory files 00:25:13.656 15:20:51 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 82353 00:25:13.656 15:20:51 ftl.ftl_restore -- common/autotest_common.sh@948 -- # '[' -z 82353 ']' 00:25:13.656 15:20:51 ftl.ftl_restore -- common/autotest_common.sh@952 -- # kill -0 82353 00:25:13.656 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (82353) - No such process 00:25:13.656 15:20:51 ftl.ftl_restore -- common/autotest_common.sh@975 -- # echo 'Process with pid 82353 is not found' 00:25:13.656 15:20:51 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:25:13.656 15:20:51 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:13.656 15:20:51 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:25:13.656 15:20:51 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:25:13.656 15:20:51 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:25:13.656 15:20:51 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:13.656 15:20:51 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:25:13.656 ************************************ 00:25:13.656 END TEST ftl_restore 00:25:13.656 ************************************ 00:25:13.656 00:25:13.656 real 2m42.524s 00:25:13.656 user 2m31.972s 00:25:13.656 sys 0m12.093s 00:25:13.656 15:20:51 ftl.ftl_restore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:13.656 15:20:51 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:25:13.656 15:20:51 ftl -- common/autotest_common.sh@1142 -- # return 0 00:25:13.656 15:20:51 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:13.656 15:20:51 ftl 
-- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:25:13.656 15:20:51 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:13.656 15:20:51 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:13.656 ************************************ 00:25:13.656 START TEST ftl_dirty_shutdown 00:25:13.656 ************************************ 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:13.656 * Looking for test storage... 00:25:13.656 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=84108 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 84108 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@829 -- # '[' -z 84108 ']' 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:13.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:13.656 15:20:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:13.656 [2024-07-15 15:20:51.736127] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
00:25:13.657 [2024-07-15 15:20:51.736350] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84108 ] 00:25:13.918 [2024-07-15 15:20:51.900366] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.177 [2024-07-15 15:20:52.134113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.131 15:20:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:15.131 15:20:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@862 -- # return 0 00:25:15.131 15:20:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:15.131 15:20:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:25:15.131 15:20:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:15.131 15:20:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:25:15.131 15:20:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:25:15.131 15:20:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:15.388 15:20:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:15.388 15:20:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:25:15.388 15:20:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:15.388 15:20:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:25:15.388 15:20:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:15.388 15:20:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:25:15.388 15:20:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:25:15.388 15:20:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:15.646 15:20:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:15.646 { 00:25:15.646 "name": "nvme0n1", 00:25:15.646 "aliases": [ 00:25:15.646 "9cfa443f-5f1d-4729-ac05-c9e4fe9f6257" 00:25:15.646 ], 00:25:15.646 "product_name": "NVMe disk", 00:25:15.646 "block_size": 4096, 00:25:15.646 "num_blocks": 1310720, 00:25:15.646 "uuid": "9cfa443f-5f1d-4729-ac05-c9e4fe9f6257", 00:25:15.646 "assigned_rate_limits": { 00:25:15.646 "rw_ios_per_sec": 0, 00:25:15.646 "rw_mbytes_per_sec": 0, 00:25:15.646 "r_mbytes_per_sec": 0, 00:25:15.646 "w_mbytes_per_sec": 0 00:25:15.646 }, 00:25:15.646 "claimed": true, 00:25:15.646 "claim_type": "read_many_write_one", 00:25:15.646 "zoned": false, 00:25:15.646 "supported_io_types": { 00:25:15.646 "read": true, 00:25:15.646 "write": true, 00:25:15.646 "unmap": true, 00:25:15.646 "flush": true, 00:25:15.646 "reset": true, 00:25:15.646 "nvme_admin": true, 00:25:15.646 "nvme_io": true, 00:25:15.646 "nvme_io_md": false, 00:25:15.646 "write_zeroes": true, 00:25:15.646 "zcopy": false, 00:25:15.646 "get_zone_info": false, 00:25:15.646 "zone_management": false, 00:25:15.646 "zone_append": false, 00:25:15.646 "compare": true, 00:25:15.646 "compare_and_write": false, 00:25:15.646 "abort": true, 00:25:15.646 "seek_hole": false, 00:25:15.646 "seek_data": false, 00:25:15.646 "copy": true, 00:25:15.646 
"nvme_iov_md": false 00:25:15.646 }, 00:25:15.646 "driver_specific": { 00:25:15.646 "nvme": [ 00:25:15.646 { 00:25:15.646 "pci_address": "0000:00:11.0", 00:25:15.646 "trid": { 00:25:15.646 "trtype": "PCIe", 00:25:15.646 "traddr": "0000:00:11.0" 00:25:15.646 }, 00:25:15.646 "ctrlr_data": { 00:25:15.646 "cntlid": 0, 00:25:15.646 "vendor_id": "0x1b36", 00:25:15.646 "model_number": "QEMU NVMe Ctrl", 00:25:15.646 "serial_number": "12341", 00:25:15.646 "firmware_revision": "8.0.0", 00:25:15.646 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:15.646 "oacs": { 00:25:15.646 "security": 0, 00:25:15.646 "format": 1, 00:25:15.646 "firmware": 0, 00:25:15.646 "ns_manage": 1 00:25:15.646 }, 00:25:15.646 "multi_ctrlr": false, 00:25:15.646 "ana_reporting": false 00:25:15.646 }, 00:25:15.646 "vs": { 00:25:15.646 "nvme_version": "1.4" 00:25:15.646 }, 00:25:15.646 "ns_data": { 00:25:15.646 "id": 1, 00:25:15.646 "can_share": false 00:25:15.646 } 00:25:15.646 } 00:25:15.646 ], 00:25:15.646 "mp_policy": "active_passive" 00:25:15.646 } 00:25:15.646 } 00:25:15.646 ]' 00:25:15.646 15:20:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:15.646 15:20:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:25:15.646 15:20:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:15.646 15:20:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:25:15.646 15:20:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:25:15.646 15:20:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:25:15.646 15:20:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:25:15.646 15:20:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:15.646 15:20:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:25:15.646 15:20:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:15.646 15:20:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:15.903 15:20:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=32828d38-88ce-42d5-ae8c-60f363f727c8 00:25:15.903 15:20:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:25:15.903 15:20:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 32828d38-88ce-42d5-ae8c-60f363f727c8 00:25:16.160 15:20:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:16.160 15:20:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=cc1af4f6-c736-4cbd-999f-e3f047d3d87c 00:25:16.160 15:20:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u cc1af4f6-c736-4cbd-999f-e3f047d3d87c 00:25:16.418 15:20:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=9afa12b2-01c4-4dba-b959-faa568984fe3 00:25:16.418 15:20:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:25:16.418 15:20:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 9afa12b2-01c4-4dba-b959-faa568984fe3 00:25:16.418 15:20:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:25:16.418 15:20:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:16.418 
15:20:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=9afa12b2-01c4-4dba-b959-faa568984fe3 00:25:16.419 15:20:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:25:16.419 15:20:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 9afa12b2-01c4-4dba-b959-faa568984fe3 00:25:16.419 15:20:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=9afa12b2-01c4-4dba-b959-faa568984fe3 00:25:16.419 15:20:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:16.419 15:20:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:25:16.419 15:20:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:25:16.419 15:20:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9afa12b2-01c4-4dba-b959-faa568984fe3 00:25:16.677 15:20:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:16.677 { 00:25:16.677 "name": "9afa12b2-01c4-4dba-b959-faa568984fe3", 00:25:16.677 "aliases": [ 00:25:16.677 "lvs/nvme0n1p0" 00:25:16.677 ], 00:25:16.677 "product_name": "Logical Volume", 00:25:16.677 "block_size": 4096, 00:25:16.677 "num_blocks": 26476544, 00:25:16.677 "uuid": "9afa12b2-01c4-4dba-b959-faa568984fe3", 00:25:16.677 "assigned_rate_limits": { 00:25:16.677 "rw_ios_per_sec": 0, 00:25:16.677 "rw_mbytes_per_sec": 0, 00:25:16.677 "r_mbytes_per_sec": 0, 00:25:16.677 "w_mbytes_per_sec": 0 00:25:16.677 }, 00:25:16.677 "claimed": false, 00:25:16.677 "zoned": false, 00:25:16.677 "supported_io_types": { 00:25:16.677 "read": true, 00:25:16.677 "write": true, 00:25:16.677 "unmap": true, 00:25:16.677 "flush": false, 00:25:16.677 "reset": true, 00:25:16.677 "nvme_admin": false, 00:25:16.677 "nvme_io": false, 00:25:16.677 "nvme_io_md": false, 00:25:16.677 "write_zeroes": true, 00:25:16.677 "zcopy": false, 00:25:16.677 "get_zone_info": false, 00:25:16.677 "zone_management": false, 00:25:16.677 "zone_append": false, 00:25:16.677 "compare": false, 00:25:16.677 "compare_and_write": false, 00:25:16.677 "abort": false, 00:25:16.677 "seek_hole": true, 00:25:16.677 "seek_data": true, 00:25:16.677 "copy": false, 00:25:16.677 "nvme_iov_md": false 00:25:16.677 }, 00:25:16.677 "driver_specific": { 00:25:16.677 "lvol": { 00:25:16.677 "lvol_store_uuid": "cc1af4f6-c736-4cbd-999f-e3f047d3d87c", 00:25:16.677 "base_bdev": "nvme0n1", 00:25:16.677 "thin_provision": true, 00:25:16.677 "num_allocated_clusters": 0, 00:25:16.677 "snapshot": false, 00:25:16.677 "clone": false, 00:25:16.677 "esnap_clone": false 00:25:16.677 } 00:25:16.677 } 00:25:16.677 } 00:25:16.677 ]' 00:25:16.677 15:20:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:16.677 15:20:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:25:16.677 15:20:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:16.677 15:20:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:25:16.677 15:20:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:25:16.677 15:20:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:25:16.677 15:20:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:25:16.677 15:20:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:25:16.677 15:20:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:16.936 15:20:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:16.936 15:20:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:16.936 15:20:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 9afa12b2-01c4-4dba-b959-faa568984fe3 00:25:16.936 15:20:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=9afa12b2-01c4-4dba-b959-faa568984fe3 00:25:16.936 15:20:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:16.936 15:20:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:25:16.936 15:20:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:25:16.936 15:20:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9afa12b2-01c4-4dba-b959-faa568984fe3 00:25:17.195 15:20:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:17.195 { 00:25:17.195 "name": "9afa12b2-01c4-4dba-b959-faa568984fe3", 00:25:17.195 "aliases": [ 00:25:17.195 "lvs/nvme0n1p0" 00:25:17.195 ], 00:25:17.195 "product_name": "Logical Volume", 00:25:17.195 "block_size": 4096, 00:25:17.195 "num_blocks": 26476544, 00:25:17.195 "uuid": "9afa12b2-01c4-4dba-b959-faa568984fe3", 00:25:17.195 "assigned_rate_limits": { 00:25:17.195 "rw_ios_per_sec": 0, 00:25:17.195 "rw_mbytes_per_sec": 0, 00:25:17.195 "r_mbytes_per_sec": 0, 00:25:17.195 "w_mbytes_per_sec": 0 00:25:17.195 }, 00:25:17.195 "claimed": false, 00:25:17.195 "zoned": false, 00:25:17.195 "supported_io_types": { 00:25:17.195 "read": true, 00:25:17.195 "write": true, 00:25:17.195 "unmap": true, 00:25:17.195 "flush": false, 00:25:17.195 "reset": true, 00:25:17.195 "nvme_admin": false, 00:25:17.195 "nvme_io": false, 00:25:17.195 "nvme_io_md": false, 00:25:17.195 "write_zeroes": true, 00:25:17.195 "zcopy": false, 00:25:17.195 "get_zone_info": false, 00:25:17.195 "zone_management": false, 00:25:17.195 "zone_append": false, 00:25:17.195 "compare": false, 00:25:17.195 "compare_and_write": false, 00:25:17.195 "abort": false, 00:25:17.195 "seek_hole": true, 00:25:17.195 "seek_data": true, 00:25:17.195 "copy": false, 00:25:17.195 "nvme_iov_md": false 00:25:17.195 }, 00:25:17.195 "driver_specific": { 00:25:17.195 "lvol": { 00:25:17.195 "lvol_store_uuid": "cc1af4f6-c736-4cbd-999f-e3f047d3d87c", 00:25:17.195 "base_bdev": "nvme0n1", 00:25:17.195 "thin_provision": true, 00:25:17.195 "num_allocated_clusters": 0, 00:25:17.195 "snapshot": false, 00:25:17.195 "clone": false, 00:25:17.195 "esnap_clone": false 00:25:17.195 } 00:25:17.195 } 00:25:17.195 } 00:25:17.195 ]' 00:25:17.195 15:20:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:17.195 15:20:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:25:17.195 15:20:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:17.195 15:20:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:25:17.195 15:20:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:25:17.195 15:20:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:25:17.195 15:20:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:25:17.195 15:20:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:17.454 15:20:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:25:17.454 15:20:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 9afa12b2-01c4-4dba-b959-faa568984fe3 00:25:17.454 15:20:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=9afa12b2-01c4-4dba-b959-faa568984fe3 00:25:17.454 15:20:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:17.454 15:20:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:25:17.454 15:20:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:25:17.454 15:20:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9afa12b2-01c4-4dba-b959-faa568984fe3 00:25:17.712 15:20:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:17.712 { 00:25:17.712 "name": "9afa12b2-01c4-4dba-b959-faa568984fe3", 00:25:17.712 "aliases": [ 00:25:17.712 "lvs/nvme0n1p0" 00:25:17.712 ], 00:25:17.712 "product_name": "Logical Volume", 00:25:17.712 "block_size": 4096, 00:25:17.712 "num_blocks": 26476544, 00:25:17.712 "uuid": "9afa12b2-01c4-4dba-b959-faa568984fe3", 00:25:17.712 "assigned_rate_limits": { 00:25:17.712 "rw_ios_per_sec": 0, 00:25:17.712 "rw_mbytes_per_sec": 0, 00:25:17.712 "r_mbytes_per_sec": 0, 00:25:17.712 "w_mbytes_per_sec": 0 00:25:17.712 }, 00:25:17.712 "claimed": false, 00:25:17.712 "zoned": false, 00:25:17.712 "supported_io_types": { 00:25:17.712 "read": true, 00:25:17.712 "write": true, 00:25:17.712 "unmap": true, 00:25:17.712 "flush": false, 00:25:17.712 "reset": true, 00:25:17.712 "nvme_admin": false, 00:25:17.712 "nvme_io": false, 00:25:17.712 "nvme_io_md": false, 00:25:17.712 "write_zeroes": true, 00:25:17.712 "zcopy": false, 00:25:17.712 "get_zone_info": false, 00:25:17.712 "zone_management": false, 00:25:17.712 "zone_append": false, 00:25:17.712 "compare": false, 00:25:17.712 "compare_and_write": false, 00:25:17.712 "abort": false, 00:25:17.712 "seek_hole": true, 00:25:17.712 "seek_data": true, 00:25:17.712 "copy": false, 00:25:17.712 "nvme_iov_md": false 00:25:17.712 }, 00:25:17.712 "driver_specific": { 00:25:17.712 "lvol": { 00:25:17.712 "lvol_store_uuid": "cc1af4f6-c736-4cbd-999f-e3f047d3d87c", 00:25:17.712 "base_bdev": "nvme0n1", 00:25:17.712 "thin_provision": true, 00:25:17.712 "num_allocated_clusters": 0, 00:25:17.712 "snapshot": false, 00:25:17.712 "clone": false, 00:25:17.712 "esnap_clone": false 00:25:17.712 } 00:25:17.712 } 00:25:17.712 } 00:25:17.712 ]' 00:25:17.712 15:20:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:17.712 15:20:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:25:17.712 15:20:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:17.712 15:20:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:25:17.712 15:20:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:25:17.712 15:20:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:25:17.712 15:20:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:25:17.712 15:20:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 9afa12b2-01c4-4dba-b959-faa568984fe3 
--l2p_dram_limit 10' 00:25:17.712 15:20:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:25:17.712 15:20:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:25:17.712 15:20:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:25:17.712 15:20:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 9afa12b2-01c4-4dba-b959-faa568984fe3 --l2p_dram_limit 10 -c nvc0n1p0 00:25:17.971 [2024-07-15 15:20:55.947951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.971 [2024-07-15 15:20:55.948026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:17.971 [2024-07-15 15:20:55.948058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:17.971 [2024-07-15 15:20:55.948069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.971 [2024-07-15 15:20:55.948139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.971 [2024-07-15 15:20:55.948152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:17.971 [2024-07-15 15:20:55.948161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:25:17.971 [2024-07-15 15:20:55.948171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.971 [2024-07-15 15:20:55.948193] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:17.971 [2024-07-15 15:20:55.949510] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:17.971 [2024-07-15 15:20:55.949535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.971 [2024-07-15 15:20:55.949548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:17.971 [2024-07-15 15:20:55.949557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.350 ms 00:25:17.971 [2024-07-15 15:20:55.949566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.971 [2024-07-15 15:20:55.949598] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 3e1764c2-857f-4f54-b9e9-71a481f2125b 00:25:17.971 [2024-07-15 15:20:55.951082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.972 [2024-07-15 15:20:55.951115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:17.972 [2024-07-15 15:20:55.951128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:17.972 [2024-07-15 15:20:55.951136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.972 [2024-07-15 15:20:55.958637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.972 [2024-07-15 15:20:55.958666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:17.972 [2024-07-15 15:20:55.958680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.431 ms 00:25:17.972 [2024-07-15 15:20:55.958703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.972 [2024-07-15 15:20:55.958818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.972 [2024-07-15 15:20:55.958833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:17.972 [2024-07-15 15:20:55.958844] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:25:17.972 [2024-07-15 15:20:55.958851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.972 [2024-07-15 15:20:55.958924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.972 [2024-07-15 15:20:55.958934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:17.972 [2024-07-15 15:20:55.958944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:17.972 [2024-07-15 15:20:55.958953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.972 [2024-07-15 15:20:55.958980] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:17.972 [2024-07-15 15:20:55.964754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.972 [2024-07-15 15:20:55.964785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:17.972 [2024-07-15 15:20:55.964795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.796 ms 00:25:17.972 [2024-07-15 15:20:55.964806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.972 [2024-07-15 15:20:55.964842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.972 [2024-07-15 15:20:55.964853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:17.972 [2024-07-15 15:20:55.964861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:17.972 [2024-07-15 15:20:55.964869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.972 [2024-07-15 15:20:55.964910] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:17.972 [2024-07-15 15:20:55.965056] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:17.972 [2024-07-15 15:20:55.965068] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:17.972 [2024-07-15 15:20:55.965082] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:25:17.972 [2024-07-15 15:20:55.965093] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:17.972 [2024-07-15 15:20:55.965121] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:17.972 [2024-07-15 15:20:55.965129] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:17.972 [2024-07-15 15:20:55.965139] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:17.972 [2024-07-15 15:20:55.965148] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:17.972 [2024-07-15 15:20:55.965158] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:17.972 [2024-07-15 15:20:55.965166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.972 [2024-07-15 15:20:55.965175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:17.972 [2024-07-15 15:20:55.965183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:25:17.972 [2024-07-15 15:20:55.965191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.972 [2024-07-15 15:20:55.965263] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.972 [2024-07-15 15:20:55.965273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:17.972 [2024-07-15 15:20:55.965281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:17.972 [2024-07-15 15:20:55.965291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.972 [2024-07-15 15:20:55.965377] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:17.972 [2024-07-15 15:20:55.965396] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:17.972 [2024-07-15 15:20:55.965418] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:17.972 [2024-07-15 15:20:55.965428] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:17.972 [2024-07-15 15:20:55.965436] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:17.972 [2024-07-15 15:20:55.965445] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:17.972 [2024-07-15 15:20:55.965451] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:17.972 [2024-07-15 15:20:55.965459] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:17.972 [2024-07-15 15:20:55.965466] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:17.972 [2024-07-15 15:20:55.965474] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:17.972 [2024-07-15 15:20:55.965481] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:17.972 [2024-07-15 15:20:55.965490] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:17.972 [2024-07-15 15:20:55.965496] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:17.972 [2024-07-15 15:20:55.965505] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:17.972 [2024-07-15 15:20:55.965512] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:17.972 [2024-07-15 15:20:55.965520] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:17.972 [2024-07-15 15:20:55.965527] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:17.972 [2024-07-15 15:20:55.965664] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:17.972 [2024-07-15 15:20:55.965671] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:17.972 [2024-07-15 15:20:55.965679] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:17.972 [2024-07-15 15:20:55.965686] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:17.972 [2024-07-15 15:20:55.965711] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:17.972 [2024-07-15 15:20:55.965718] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:17.972 [2024-07-15 15:20:55.965727] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:17.972 [2024-07-15 15:20:55.965734] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:17.972 [2024-07-15 15:20:55.965744] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:17.972 [2024-07-15 15:20:55.965751] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:17.972 [2024-07-15 15:20:55.965759] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:17.972 [2024-07-15 15:20:55.965767] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:17.972 [2024-07-15 15:20:55.965776] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:17.972 [2024-07-15 15:20:55.965783] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:17.972 [2024-07-15 15:20:55.965792] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:17.972 [2024-07-15 15:20:55.965799] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:17.972 [2024-07-15 15:20:55.965809] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:17.972 [2024-07-15 15:20:55.965817] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:17.972 [2024-07-15 15:20:55.965826] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:17.972 [2024-07-15 15:20:55.965833] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:17.972 [2024-07-15 15:20:55.965842] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:17.972 [2024-07-15 15:20:55.965849] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:17.972 [2024-07-15 15:20:55.965859] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:17.972 [2024-07-15 15:20:55.965866] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:17.972 [2024-07-15 15:20:55.965875] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:17.972 [2024-07-15 15:20:55.965883] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:17.972 [2024-07-15 15:20:55.965891] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:17.972 [2024-07-15 15:20:55.965900] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:17.972 [2024-07-15 15:20:55.965910] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:17.972 [2024-07-15 15:20:55.965917] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:17.972 [2024-07-15 15:20:55.965927] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:17.972 [2024-07-15 15:20:55.965935] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:17.972 [2024-07-15 15:20:55.965946] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:17.972 [2024-07-15 15:20:55.965953] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:17.972 [2024-07-15 15:20:55.965962] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:17.972 [2024-07-15 15:20:55.965969] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:17.972 [2024-07-15 15:20:55.965983] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:17.972 [2024-07-15 15:20:55.965994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:17.972 [2024-07-15 15:20:55.966018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:17.972 [2024-07-15 15:20:55.966027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:17.972 [2024-07-15 15:20:55.966037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:17.972 [2024-07-15 15:20:55.966045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:17.972 [2024-07-15 15:20:55.966055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:17.972 [2024-07-15 15:20:55.966062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:17.972 [2024-07-15 15:20:55.966073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:17.972 [2024-07-15 15:20:55.966081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:17.972 [2024-07-15 15:20:55.966092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:17.972 [2024-07-15 15:20:55.966099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:17.972 [2024-07-15 15:20:55.966111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:17.972 [2024-07-15 15:20:55.966119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:17.973 [2024-07-15 15:20:55.966129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:17.973 [2024-07-15 15:20:55.966137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:17.973 [2024-07-15 15:20:55.966146] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:17.973 [2024-07-15 15:20:55.966155] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:17.973 [2024-07-15 15:20:55.966165] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:17.973 [2024-07-15 15:20:55.966173] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:17.973 [2024-07-15 15:20:55.966183] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:17.973 [2024-07-15 15:20:55.966192] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:17.973 [2024-07-15 15:20:55.966203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.973 [2024-07-15 15:20:55.966211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:17.973 [2024-07-15 15:20:55.966222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.877 ms 00:25:17.973 [2024-07-15 15:20:55.966230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.973 [2024-07-15 15:20:55.966278] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:25:17.973 [2024-07-15 15:20:55.966289] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:21.255 [2024-07-15 15:20:58.778800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.255 [2024-07-15 15:20:58.778859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:21.255 [2024-07-15 15:20:58.778879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2817.938 ms 00:25:21.255 [2024-07-15 15:20:58.778889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.255 [2024-07-15 15:20:58.825024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.255 [2024-07-15 15:20:58.825071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:21.255 [2024-07-15 15:20:58.825086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.883 ms 00:25:21.255 [2024-07-15 15:20:58.825095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.255 [2024-07-15 15:20:58.825255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.255 [2024-07-15 15:20:58.825266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:21.255 [2024-07-15 15:20:58.825278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:25:21.255 [2024-07-15 15:20:58.825288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.255 [2024-07-15 15:20:58.876508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.255 [2024-07-15 15:20:58.876554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:21.255 [2024-07-15 15:20:58.876568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.280 ms 00:25:21.255 [2024-07-15 15:20:58.876575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.255 [2024-07-15 15:20:58.876629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.255 [2024-07-15 15:20:58.876643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:21.255 [2024-07-15 15:20:58.876653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:21.255 [2024-07-15 15:20:58.876659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.255 [2024-07-15 15:20:58.877144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.255 [2024-07-15 15:20:58.877156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:21.255 [2024-07-15 15:20:58.877166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.420 ms 00:25:21.255 [2024-07-15 15:20:58.877172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.255 [2024-07-15 15:20:58.877279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.255 [2024-07-15 15:20:58.877292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:21.255 [2024-07-15 15:20:58.877304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:25:21.255 [2024-07-15 15:20:58.877311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.255 [2024-07-15 15:20:58.897788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.255 [2024-07-15 15:20:58.897834] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:21.255 [2024-07-15 15:20:58.897848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.490 ms 00:25:21.255 [2024-07-15 15:20:58.897856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.255 [2024-07-15 15:20:58.912072] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:21.255 [2024-07-15 15:20:58.915390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.255 [2024-07-15 15:20:58.915425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:21.255 [2024-07-15 15:20:58.915437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.461 ms 00:25:21.255 [2024-07-15 15:20:58.915446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.255 [2024-07-15 15:20:59.019707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.255 [2024-07-15 15:20:59.019771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:21.255 [2024-07-15 15:20:59.019786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.417 ms 00:25:21.255 [2024-07-15 15:20:59.019796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.255 [2024-07-15 15:20:59.019984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.255 [2024-07-15 15:20:59.020012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:21.255 [2024-07-15 15:20:59.020021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:25:21.255 [2024-07-15 15:20:59.020032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.255 [2024-07-15 15:20:59.058997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.255 [2024-07-15 15:20:59.059057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:21.255 [2024-07-15 15:20:59.059070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.991 ms 00:25:21.255 [2024-07-15 15:20:59.059080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.255 [2024-07-15 15:20:59.098633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.255 [2024-07-15 15:20:59.098690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:21.255 [2024-07-15 15:20:59.098703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.580 ms 00:25:21.255 [2024-07-15 15:20:59.098712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.255 [2024-07-15 15:20:59.099484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.255 [2024-07-15 15:20:59.099513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:21.255 [2024-07-15 15:20:59.099523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.730 ms 00:25:21.255 [2024-07-15 15:20:59.099535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.255 [2024-07-15 15:20:59.215380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.255 [2024-07-15 15:20:59.215443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:21.255 [2024-07-15 15:20:59.215475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 116.012 ms 00:25:21.255 [2024-07-15 15:20:59.215489] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.255 [2024-07-15 15:20:59.256212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.255 [2024-07-15 15:20:59.256271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:21.255 [2024-07-15 15:20:59.256285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.753 ms 00:25:21.255 [2024-07-15 15:20:59.256310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.255 [2024-07-15 15:20:59.297499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.255 [2024-07-15 15:20:59.297560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:21.255 [2024-07-15 15:20:59.297573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.221 ms 00:25:21.255 [2024-07-15 15:20:59.297598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.255 [2024-07-15 15:20:59.337746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.255 [2024-07-15 15:20:59.337799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:21.255 [2024-07-15 15:20:59.337812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.178 ms 00:25:21.255 [2024-07-15 15:20:59.337838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.255 [2024-07-15 15:20:59.337899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.255 [2024-07-15 15:20:59.337914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:21.255 [2024-07-15 15:20:59.337924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:25:21.255 [2024-07-15 15:20:59.337936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.255 [2024-07-15 15:20:59.338047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.255 [2024-07-15 15:20:59.338061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:21.255 [2024-07-15 15:20:59.338072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:25:21.255 [2024-07-15 15:20:59.338080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.255 [2024-07-15 15:20:59.339216] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3397.278 ms, result 0 00:25:21.255 { 00:25:21.255 "name": "ftl0", 00:25:21.255 "uuid": "3e1764c2-857f-4f54-b9e9-71a481f2125b" 00:25:21.255 } 00:25:21.514 15:20:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:25:21.514 15:20:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:21.514 15:20:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:25:21.514 15:20:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:25:21.514 15:20:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:25:21.774 /dev/nbd0 00:25:21.774 15:20:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:25:21.774 15:20:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:25:21.774 15:20:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@867 -- # local i 00:25:21.774 15:20:59 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:21.774 15:20:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:21.774 15:20:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:25:21.774 15:20:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # break 00:25:21.774 15:20:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:21.774 15:20:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:21.774 15:20:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:25:21.774 1+0 records in 00:25:21.774 1+0 records out 00:25:21.774 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000428639 s, 9.6 MB/s 00:25:21.774 15:20:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:21.774 15:20:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # size=4096 00:25:21.774 15:20:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:21.774 15:20:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:21.774 15:20:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # return 0 00:25:21.774 15:20:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:25:21.774 [2024-07-15 15:20:59.858138] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:25:21.774 [2024-07-15 15:20:59.858273] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84246 ] 00:25:22.033 [2024-07-15 15:21:00.014129] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.291 [2024-07-15 15:21:00.241864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:28.332  Copying: 232/1024 [MB] (232 MBps) Copying: 477/1024 [MB] (245 MBps) Copying: 705/1024 [MB] (228 MBps) Copying: 935/1024 [MB] (230 MBps) Copying: 1024/1024 [MB] (average 234 MBps) 00:25:28.332 00:25:28.332 15:21:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:30.233 15:21:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:25:30.233 [2024-07-15 15:21:08.227800] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
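The spdk_dd banner above belongs to step ftl/dirty_shutdown.sh@77; its EAL parameters and copy progress follow below. Taken together, steps @75-@77 stage the reference data for the dirty-shutdown check: 262144 blocks of 4096 bytes (1 GiB) of random data are written to a scratch file, md5sum is run on it (presumably to keep a checksum for the post-recovery comparison), and the file is then replayed onto the ftl0 bdev through its NBD endpoint with direct I/O. A minimal sketch of that sequence, restricted to commands visible in this trace (paths shortened; SPDK_DD stands for /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd):

  # 1 GiB of random reference data (262144 * 4096 bytes)
  $SPDK_DD -m 0x2 --if=/dev/urandom --of=testfile --bs=4096 --count=262144
  # checksum of the reference file
  md5sum testfile
  # write the same data through /dev/nbd0, which is backed by the ftl0 bdev
  $SPDK_DD -m 0x2 --if=testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct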
00:25:30.233 [2024-07-15 15:21:08.227918] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84333 ] 00:25:30.492 [2024-07-15 15:21:08.393083] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.750 [2024-07-15 15:21:08.633762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:18.477  Copying: 21/1024 [MB] (21 MBps) Copying: 42/1024 [MB] (20 MBps) Copying: 63/1024 [MB] (20 MBps) Copying: 83/1024 [MB] (20 MBps) Copying: 104/1024 [MB] (20 MBps) Copying: 123/1024 [MB] (18 MBps) Copying: 145/1024 [MB] (22 MBps) Copying: 168/1024 [MB] (23 MBps) Copying: 192/1024 [MB] (23 MBps) Copying: 215/1024 [MB] (23 MBps) Copying: 238/1024 [MB] (22 MBps) Copying: 261/1024 [MB] (23 MBps) Copying: 284/1024 [MB] (22 MBps) Copying: 307/1024 [MB] (23 MBps) Copying: 331/1024 [MB] (23 MBps) Copying: 352/1024 [MB] (21 MBps) Copying: 374/1024 [MB] (22 MBps) Copying: 397/1024 [MB] (22 MBps) Copying: 419/1024 [MB] (22 MBps) Copying: 441/1024 [MB] (22 MBps) Copying: 463/1024 [MB] (21 MBps) Copying: 485/1024 [MB] (22 MBps) Copying: 507/1024 [MB] (21 MBps) Copying: 529/1024 [MB] (21 MBps) Copying: 550/1024 [MB] (21 MBps) Copying: 572/1024 [MB] (22 MBps) Copying: 594/1024 [MB] (21 MBps) Copying: 616/1024 [MB] (22 MBps) Copying: 639/1024 [MB] (22 MBps) Copying: 662/1024 [MB] (22 MBps) Copying: 684/1024 [MB] (22 MBps) Copying: 707/1024 [MB] (22 MBps) Copying: 729/1024 [MB] (22 MBps) Copying: 751/1024 [MB] (22 MBps) Copying: 774/1024 [MB] (22 MBps) Copying: 796/1024 [MB] (22 MBps) Copying: 818/1024 [MB] (21 MBps) Copying: 840/1024 [MB] (22 MBps) Copying: 863/1024 [MB] (22 MBps) Copying: 886/1024 [MB] (23 MBps) Copying: 909/1024 [MB] (23 MBps) Copying: 932/1024 [MB] (22 MBps) Copying: 954/1024 [MB] (22 MBps) Copying: 976/1024 [MB] (22 MBps) Copying: 999/1024 [MB] (22 MBps) Copying: 1021/1024 [MB] (21 MBps) Copying: 1024/1024 [MB] (average 22 MBps) 00:26:18.477 00:26:18.477 15:21:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:26:18.477 15:21:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:26:18.736 15:21:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:26:18.736 [2024-07-15 15:21:56.784971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.736 [2024-07-15 15:21:56.785034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:18.736 [2024-07-15 15:21:56.785058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:18.736 [2024-07-15 15:21:56.785066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.736 [2024-07-15 15:21:56.785111] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:18.736 [2024-07-15 15:21:56.789208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.736 [2024-07-15 15:21:56.789244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:18.736 [2024-07-15 15:21:56.789255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.090 ms 00:26:18.736 [2024-07-15 15:21:56.789266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
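The bdev_ftl_unload trace started at ftl/dirty_shutdown.sh@80 continues below: each management step is reported with its name, duration and status, the band validity table and device statistics are dumped, and the sequence ends with the 'FTL shutdown' summary. The teardown that triggered it is visible above and consists of three steps; a sketch using only the commands shown in this run (rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py):

  sync /dev/nbd0                   # flush outstanding data for the NBD device
  rpc.py nbd_stop_disk /dev/nbd0   # detach the NBD endpoint from ftl0
  rpc.py bdev_ftl_unload -b ftl0   # shut the FTL instance down; the trace shows it persisting L2P, NV cache and band metadata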
00:26:18.736 [2024-07-15 15:21:56.791466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.736 [2024-07-15 15:21:56.791513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:18.736 [2024-07-15 15:21:56.791527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.176 ms 00:26:18.736 [2024-07-15 15:21:56.791538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.736 [2024-07-15 15:21:56.809524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.736 [2024-07-15 15:21:56.809571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:18.736 [2024-07-15 15:21:56.809583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.996 ms 00:26:18.736 [2024-07-15 15:21:56.809609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.736 [2024-07-15 15:21:56.815019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.736 [2024-07-15 15:21:56.815055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:18.736 [2024-07-15 15:21:56.815065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.384 ms 00:26:18.736 [2024-07-15 15:21:56.815091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.000 [2024-07-15 15:21:56.860313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.000 [2024-07-15 15:21:56.860366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:19.000 [2024-07-15 15:21:56.860380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.228 ms 00:26:19.000 [2024-07-15 15:21:56.860389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.000 [2024-07-15 15:21:56.885109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.000 [2024-07-15 15:21:56.885162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:19.000 [2024-07-15 15:21:56.885179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.709 ms 00:26:19.000 [2024-07-15 15:21:56.885189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.000 [2024-07-15 15:21:56.885353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.000 [2024-07-15 15:21:56.885373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:19.000 [2024-07-15 15:21:56.885382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:26:19.000 [2024-07-15 15:21:56.885392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.000 [2024-07-15 15:21:56.924809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.000 [2024-07-15 15:21:56.924855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:19.000 [2024-07-15 15:21:56.924868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.474 ms 00:26:19.000 [2024-07-15 15:21:56.924894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.000 [2024-07-15 15:21:56.964908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.000 [2024-07-15 15:21:56.964949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:19.000 [2024-07-15 15:21:56.964961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.047 ms 00:26:19.000 [2024-07-15 15:21:56.964970] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.000 [2024-07-15 15:21:57.003157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.000 [2024-07-15 15:21:57.003198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:19.000 [2024-07-15 15:21:57.003210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.208 ms 00:26:19.000 [2024-07-15 15:21:57.003235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.000 [2024-07-15 15:21:57.043816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.000 [2024-07-15 15:21:57.043863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:19.000 [2024-07-15 15:21:57.043876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.564 ms 00:26:19.000 [2024-07-15 15:21:57.043885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.000 [2024-07-15 15:21:57.043928] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:19.000 [2024-07-15 15:21:57.043946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.043956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.043967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.043975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.043985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.044004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.044014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.044022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.044038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.044046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.044055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.044063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.044073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.044082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.044091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.044099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.044108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.044116] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.044125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.044133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.044144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.044151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.044160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.044168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.044180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.044188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.044199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.044207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.044217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.044224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.044237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.044245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.044255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.044262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.044272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.044279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.044288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.044296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.044304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.044312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.044323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.044331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 
[2024-07-15 15:21:57.044341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.044348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.044357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.044365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.044375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.044382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.044392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:19.000 [2024-07-15 15:21:57.044399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 
state: free 00:26:19.001 [2024-07-15 15:21:57.044589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 
0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:19.001 [2024-07-15 15:21:57.044907] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:19.001 [2024-07-15 15:21:57.044915] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3e1764c2-857f-4f54-b9e9-71a481f2125b 00:26:19.001 [2024-07-15 15:21:57.044924] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:19.001 [2024-07-15 15:21:57.044932] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:19.001 [2024-07-15 15:21:57.044948] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:19.001 [2024-07-15 15:21:57.044956] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:19.001 [2024-07-15 15:21:57.044965] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:19.001 [2024-07-15 15:21:57.044973] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:19.001 [2024-07-15 15:21:57.044982] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:19.001 [2024-07-15 15:21:57.044988] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:19.001 [2024-07-15 15:21:57.045004] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:19.001 [2024-07-15 15:21:57.045012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.001 [2024-07-15 15:21:57.045021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:19.001 [2024-07-15 15:21:57.045029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.088 ms 00:26:19.001 [2024-07-15 15:21:57.045038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.001 [2024-07-15 15:21:57.065891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.001 [2024-07-15 15:21:57.065931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:19.001 [2024-07-15 15:21:57.065942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.844 ms 00:26:19.001 [2024-07-15 15:21:57.065951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.001 [2024-07-15 15:21:57.066583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.001 [2024-07-15 15:21:57.066603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize 
P2L checkpointing 00:26:19.001 [2024-07-15 15:21:57.066613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.607 ms 00:26:19.001 [2024-07-15 15:21:57.066623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.260 [2024-07-15 15:21:57.131887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:19.260 [2024-07-15 15:21:57.131945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:19.260 [2024-07-15 15:21:57.131958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:19.260 [2024-07-15 15:21:57.131966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.260 [2024-07-15 15:21:57.132065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:19.260 [2024-07-15 15:21:57.132076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:19.260 [2024-07-15 15:21:57.132085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:19.260 [2024-07-15 15:21:57.132094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.260 [2024-07-15 15:21:57.132196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:19.260 [2024-07-15 15:21:57.132214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:19.260 [2024-07-15 15:21:57.132223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:19.260 [2024-07-15 15:21:57.132231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.260 [2024-07-15 15:21:57.132252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:19.260 [2024-07-15 15:21:57.132265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:19.260 [2024-07-15 15:21:57.132272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:19.260 [2024-07-15 15:21:57.132281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.260 [2024-07-15 15:21:57.250446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:19.260 [2024-07-15 15:21:57.250510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:19.261 [2024-07-15 15:21:57.250523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:19.261 [2024-07-15 15:21:57.250539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.261 [2024-07-15 15:21:57.354465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:19.261 [2024-07-15 15:21:57.354536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:19.261 [2024-07-15 15:21:57.354548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:19.261 [2024-07-15 15:21:57.354557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.261 [2024-07-15 15:21:57.354646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:19.261 [2024-07-15 15:21:57.354659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:19.261 [2024-07-15 15:21:57.354670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:19.261 [2024-07-15 15:21:57.354679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.261 [2024-07-15 15:21:57.354719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:19.261 [2024-07-15 
15:21:57.354733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:19.261 [2024-07-15 15:21:57.354740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:19.261 [2024-07-15 15:21:57.354749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.261 [2024-07-15 15:21:57.354840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:19.261 [2024-07-15 15:21:57.354855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:19.261 [2024-07-15 15:21:57.354862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:19.261 [2024-07-15 15:21:57.354872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.261 [2024-07-15 15:21:57.354906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:19.261 [2024-07-15 15:21:57.354918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:19.261 [2024-07-15 15:21:57.354925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:19.261 [2024-07-15 15:21:57.354934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.261 [2024-07-15 15:21:57.354972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:19.261 [2024-07-15 15:21:57.354982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:19.261 [2024-07-15 15:21:57.355007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:19.261 [2024-07-15 15:21:57.355019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.261 [2024-07-15 15:21:57.355082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:19.261 [2024-07-15 15:21:57.355095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:19.261 [2024-07-15 15:21:57.355103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:19.261 [2024-07-15 15:21:57.355112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.261 [2024-07-15 15:21:57.355258] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 571.357 ms, result 0 00:26:19.261 true 00:26:19.518 15:21:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 84108 00:26:19.518 15:21:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid84108 00:26:19.518 15:21:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:26:19.518 [2024-07-15 15:21:57.460781] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
00:26:19.518 [2024-07-15 15:21:57.460995] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84834 ] 00:26:19.518 [2024-07-15 15:21:57.625930] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.778 [2024-07-15 15:21:57.868294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.807  Copying: 238/1024 [MB] (238 MBps) Copying: 483/1024 [MB] (244 MBps) Copying: 722/1024 [MB] (238 MBps) Copying: 947/1024 [MB] (225 MBps) Copying: 1024/1024 [MB] (average 237 MBps) 00:26:25.807 00:26:25.807 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 84108 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:26:25.807 15:22:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:26.065 [2024-07-15 15:22:03.972468] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:26:26.065 [2024-07-15 15:22:03.972632] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84898 ] 00:26:26.065 [2024-07-15 15:22:04.151450] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:26.323 [2024-07-15 15:22:04.385657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:26.890 [2024-07-15 15:22:04.782983] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:26.890 [2024-07-15 15:22:04.783067] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:26.890 [2024-07-15 15:22:04.848474] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:26:26.890 [2024-07-15 15:22:04.848810] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:26:26.890 [2024-07-15 15:22:04.849036] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:26:27.151 [2024-07-15 15:22:05.090633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.151 [2024-07-15 15:22:05.090695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:27.151 [2024-07-15 15:22:05.090710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:27.151 [2024-07-15 15:22:05.090720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.151 [2024-07-15 15:22:05.090796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.151 [2024-07-15 15:22:05.090810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:27.151 [2024-07-15 15:22:05.090819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:26:27.151 [2024-07-15 15:22:05.090831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.151 [2024-07-15 15:22:05.090854] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:27.151 [2024-07-15 15:22:05.092129] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 
00:26:27.151 [2024-07-15 15:22:05.092159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.151 [2024-07-15 15:22:05.092169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:27.151 [2024-07-15 15:22:05.092178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.314 ms 00:26:27.151 [2024-07-15 15:22:05.092186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.151 [2024-07-15 15:22:05.093698] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:27.151 [2024-07-15 15:22:05.118341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.151 [2024-07-15 15:22:05.118413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:27.151 [2024-07-15 15:22:05.118430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.711 ms 00:26:27.151 [2024-07-15 15:22:05.118449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.151 [2024-07-15 15:22:05.118583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.151 [2024-07-15 15:22:05.118595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:27.151 [2024-07-15 15:22:05.118605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:26:27.151 [2024-07-15 15:22:05.118614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.151 [2024-07-15 15:22:05.126257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.151 [2024-07-15 15:22:05.126301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:27.151 [2024-07-15 15:22:05.126319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.554 ms 00:26:27.151 [2024-07-15 15:22:05.126328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.151 [2024-07-15 15:22:05.126441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.151 [2024-07-15 15:22:05.126459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:27.151 [2024-07-15 15:22:05.126469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:26:27.151 [2024-07-15 15:22:05.126477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.151 [2024-07-15 15:22:05.126546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.151 [2024-07-15 15:22:05.126558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:27.151 [2024-07-15 15:22:05.126568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:26:27.151 [2024-07-15 15:22:05.126579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.151 [2024-07-15 15:22:05.126608] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:27.151 [2024-07-15 15:22:05.132770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.151 [2024-07-15 15:22:05.132814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:27.151 [2024-07-15 15:22:05.132825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.182 ms 00:26:27.151 [2024-07-15 15:22:05.132833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.151 [2024-07-15 15:22:05.132869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.151 
[2024-07-15 15:22:05.132879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:27.151 [2024-07-15 15:22:05.132887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:27.151 [2024-07-15 15:22:05.132894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.151 [2024-07-15 15:22:05.132953] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:27.151 [2024-07-15 15:22:05.132976] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:27.151 [2024-07-15 15:22:05.133028] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:27.151 [2024-07-15 15:22:05.133045] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:26:27.151 [2024-07-15 15:22:05.133154] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:27.151 [2024-07-15 15:22:05.133165] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:27.152 [2024-07-15 15:22:05.133176] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:26:27.152 [2024-07-15 15:22:05.133188] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:27.152 [2024-07-15 15:22:05.133197] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:27.152 [2024-07-15 15:22:05.133206] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:27.152 [2024-07-15 15:22:05.133217] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:27.152 [2024-07-15 15:22:05.133226] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:27.152 [2024-07-15 15:22:05.133235] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:27.152 [2024-07-15 15:22:05.133243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.152 [2024-07-15 15:22:05.133251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:27.152 [2024-07-15 15:22:05.133260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:26:27.152 [2024-07-15 15:22:05.133268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.152 [2024-07-15 15:22:05.133343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.152 [2024-07-15 15:22:05.133352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:27.152 [2024-07-15 15:22:05.133361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:26:27.152 [2024-07-15 15:22:05.133369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.152 [2024-07-15 15:22:05.133465] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:27.152 [2024-07-15 15:22:05.133482] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:27.152 [2024-07-15 15:22:05.133490] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:27.152 [2024-07-15 15:22:05.133500] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:27.152 [2024-07-15 15:22:05.133508] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:27.152 [2024-07-15 15:22:05.133516] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:27.152 [2024-07-15 15:22:05.133524] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:27.152 [2024-07-15 15:22:05.133532] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:27.152 [2024-07-15 15:22:05.133540] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:27.152 [2024-07-15 15:22:05.133547] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:27.152 [2024-07-15 15:22:05.133554] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:27.152 [2024-07-15 15:22:05.133562] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:27.152 [2024-07-15 15:22:05.133569] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:27.152 [2024-07-15 15:22:05.133577] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:27.152 [2024-07-15 15:22:05.133584] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:27.152 [2024-07-15 15:22:05.133591] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:27.152 [2024-07-15 15:22:05.133620] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:27.152 [2024-07-15 15:22:05.133629] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:27.152 [2024-07-15 15:22:05.133636] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:27.152 [2024-07-15 15:22:05.133643] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:27.152 [2024-07-15 15:22:05.133651] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:27.152 [2024-07-15 15:22:05.133658] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:27.152 [2024-07-15 15:22:05.133665] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:27.152 [2024-07-15 15:22:05.133672] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:27.152 [2024-07-15 15:22:05.133680] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:27.152 [2024-07-15 15:22:05.133687] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:27.152 [2024-07-15 15:22:05.133694] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:27.152 [2024-07-15 15:22:05.133701] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:27.152 [2024-07-15 15:22:05.133708] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:27.152 [2024-07-15 15:22:05.133715] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:27.152 [2024-07-15 15:22:05.133723] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:27.152 [2024-07-15 15:22:05.133729] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:27.152 [2024-07-15 15:22:05.133737] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:27.152 [2024-07-15 15:22:05.133744] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:27.152 [2024-07-15 15:22:05.133751] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:27.152 [2024-07-15 15:22:05.133757] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:27.152 [2024-07-15 
15:22:05.133764] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:27.152 [2024-07-15 15:22:05.133772] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:27.152 [2024-07-15 15:22:05.133780] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:27.152 [2024-07-15 15:22:05.133787] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:27.152 [2024-07-15 15:22:05.133794] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:27.152 [2024-07-15 15:22:05.133801] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:27.152 [2024-07-15 15:22:05.133807] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:27.152 [2024-07-15 15:22:05.133814] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:27.152 [2024-07-15 15:22:05.133823] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:27.152 [2024-07-15 15:22:05.133830] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:27.152 [2024-07-15 15:22:05.133838] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:27.152 [2024-07-15 15:22:05.133847] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:27.152 [2024-07-15 15:22:05.133855] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:27.152 [2024-07-15 15:22:05.133862] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:27.152 [2024-07-15 15:22:05.133870] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:27.152 [2024-07-15 15:22:05.133876] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:27.152 [2024-07-15 15:22:05.133884] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:27.152 [2024-07-15 15:22:05.133893] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:27.152 [2024-07-15 15:22:05.133906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:27.152 [2024-07-15 15:22:05.133915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:27.152 [2024-07-15 15:22:05.133923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:27.152 [2024-07-15 15:22:05.133931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:27.152 [2024-07-15 15:22:05.133939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:27.152 [2024-07-15 15:22:05.133947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:27.152 [2024-07-15 15:22:05.133955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:27.152 [2024-07-15 15:22:05.133962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:27.152 [2024-07-15 15:22:05.133970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:27.152 [2024-07-15 15:22:05.133978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:27.152 [2024-07-15 15:22:05.133986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:27.152 [2024-07-15 15:22:05.133993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:27.152 [2024-07-15 15:22:05.134001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:27.152 [2024-07-15 15:22:05.134019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:27.152 [2024-07-15 15:22:05.134028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:27.152 [2024-07-15 15:22:05.134036] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:27.152 [2024-07-15 15:22:05.134046] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:27.152 [2024-07-15 15:22:05.134055] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:27.152 [2024-07-15 15:22:05.134063] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:27.152 [2024-07-15 15:22:05.134071] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:27.152 [2024-07-15 15:22:05.134080] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:27.152 [2024-07-15 15:22:05.134089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.152 [2024-07-15 15:22:05.134097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:27.152 [2024-07-15 15:22:05.134105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.685 ms 00:26:27.152 [2024-07-15 15:22:05.134113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.152 [2024-07-15 15:22:05.194808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.152 [2024-07-15 15:22:05.194864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:27.152 [2024-07-15 15:22:05.194879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.749 ms 00:26:27.152 [2024-07-15 15:22:05.194904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.152 [2024-07-15 15:22:05.195038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.152 [2024-07-15 15:22:05.195055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:27.152 [2024-07-15 15:22:05.195076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:26:27.152 [2024-07-15 15:22:05.195087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.152 [2024-07-15 15:22:05.249860] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.152 [2024-07-15 15:22:05.249916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:27.152 [2024-07-15 15:22:05.249929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.768 ms 00:26:27.152 [2024-07-15 15:22:05.249936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.152 [2024-07-15 15:22:05.250031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.153 [2024-07-15 15:22:05.250042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:27.153 [2024-07-15 15:22:05.250051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:26:27.153 [2024-07-15 15:22:05.250059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.153 [2024-07-15 15:22:05.250648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.153 [2024-07-15 15:22:05.250673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:27.153 [2024-07-15 15:22:05.250684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:26:27.153 [2024-07-15 15:22:05.250693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.153 [2024-07-15 15:22:05.250834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.153 [2024-07-15 15:22:05.250852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:27.153 [2024-07-15 15:22:05.250862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:26:27.153 [2024-07-15 15:22:05.250870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.412 [2024-07-15 15:22:05.274305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.412 [2024-07-15 15:22:05.274351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:27.412 [2024-07-15 15:22:05.274365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.455 ms 00:26:27.412 [2024-07-15 15:22:05.274373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.412 [2024-07-15 15:22:05.298233] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:27.412 [2024-07-15 15:22:05.298301] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:27.412 [2024-07-15 15:22:05.298319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.412 [2024-07-15 15:22:05.298329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:27.412 [2024-07-15 15:22:05.298341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.813 ms 00:26:27.412 [2024-07-15 15:22:05.298349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.412 [2024-07-15 15:22:05.331861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.412 [2024-07-15 15:22:05.331964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:27.412 [2024-07-15 15:22:05.331981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.484 ms 00:26:27.412 [2024-07-15 15:22:05.332004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.412 [2024-07-15 15:22:05.355578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.412 
[2024-07-15 15:22:05.355639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:27.412 [2024-07-15 15:22:05.355654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.512 ms 00:26:27.412 [2024-07-15 15:22:05.355662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.412 [2024-07-15 15:22:05.379867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.412 [2024-07-15 15:22:05.379930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:27.412 [2024-07-15 15:22:05.379946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.158 ms 00:26:27.412 [2024-07-15 15:22:05.379954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.412 [2024-07-15 15:22:05.381061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.412 [2024-07-15 15:22:05.381099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:27.412 [2024-07-15 15:22:05.381111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.937 ms 00:26:27.412 [2024-07-15 15:22:05.381120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.412 [2024-07-15 15:22:05.483777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.412 [2024-07-15 15:22:05.483856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:27.412 [2024-07-15 15:22:05.483871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.820 ms 00:26:27.412 [2024-07-15 15:22:05.483878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.412 [2024-07-15 15:22:05.498692] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:27.412 [2024-07-15 15:22:05.502086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.412 [2024-07-15 15:22:05.502124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:27.412 [2024-07-15 15:22:05.502137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.159 ms 00:26:27.412 [2024-07-15 15:22:05.502145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.412 [2024-07-15 15:22:05.502251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.412 [2024-07-15 15:22:05.502265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:27.412 [2024-07-15 15:22:05.502274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:27.412 [2024-07-15 15:22:05.502281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.412 [2024-07-15 15:22:05.502347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.412 [2024-07-15 15:22:05.502357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:27.412 [2024-07-15 15:22:05.502366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:26:27.412 [2024-07-15 15:22:05.502373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.412 [2024-07-15 15:22:05.502391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.412 [2024-07-15 15:22:05.502400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:27.412 [2024-07-15 15:22:05.502410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:27.412 
[2024-07-15 15:22:05.502418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.412 [2024-07-15 15:22:05.502449] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:27.412 [2024-07-15 15:22:05.502458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.412 [2024-07-15 15:22:05.502466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:27.412 [2024-07-15 15:22:05.502474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:27.412 [2024-07-15 15:22:05.502481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.672 [2024-07-15 15:22:05.543425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.672 [2024-07-15 15:22:05.543492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:27.672 [2024-07-15 15:22:05.543511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.003 ms 00:26:27.672 [2024-07-15 15:22:05.543519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.672 [2024-07-15 15:22:05.543620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.672 [2024-07-15 15:22:05.543630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:27.672 [2024-07-15 15:22:05.543639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:26:27.672 [2024-07-15 15:22:05.543646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.672 [2024-07-15 15:22:05.544966] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 454.681 ms, result 0 00:27:00.714  Copying: 31/1024 [MB] (31 MBps) Copying: 63/1024 [MB] (32 MBps) Copying: 95/1024 [MB] (31 MBps) Copying: 127/1024 [MB] (32 MBps) Copying: 160/1024 [MB] (32 MBps) Copying: 193/1024 [MB] (32 MBps) Copying: 226/1024 [MB] (33 MBps) Copying: 258/1024 [MB] (32 MBps) Copying: 291/1024 [MB] (32 MBps) Copying: 323/1024 [MB] (32 MBps) Copying: 355/1024 [MB] (31 MBps) Copying: 387/1024 [MB] (32 MBps) Copying: 419/1024 [MB] (31 MBps) Copying: 451/1024 [MB] (32 MBps) Copying: 482/1024 [MB] (30 MBps) Copying: 514/1024 [MB] (31 MBps) Copying: 545/1024 [MB] (31 MBps) Copying: 576/1024 [MB] (31 MBps) Copying: 608/1024 [MB] (31 MBps) Copying: 640/1024 [MB] (32 MBps) Copying: 671/1024 [MB] (30 MBps) Copying: 702/1024 [MB] (31 MBps) Copying: 733/1024 [MB] (30 MBps) Copying: 764/1024 [MB] (31 MBps) Copying: 795/1024 [MB] (31 MBps) Copying: 828/1024 [MB] (32 MBps) Copying: 859/1024 [MB] (31 MBps) Copying: 890/1024 [MB] (30 MBps) Copying: 921/1024 [MB] (31 MBps) Copying: 954/1024 [MB] (33 MBps) Copying: 985/1024 [MB] (31 MBps) Copying: 1017/1024 [MB] (31 MBps) Copying: 1048460/1048576 [kB] (6580 kBps) Copying: 1024/1024 [MB] (average 31 MBps)[2024-07-15 15:22:38.587396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.714 [2024-07-15 15:22:38.587470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:00.714 [2024-07-15 15:22:38.587486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:00.714 [2024-07-15 15:22:38.587495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.714 [2024-07-15 15:22:38.589868] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:00.714 [2024-07-15 15:22:38.597248] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.714 [2024-07-15 15:22:38.597282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:00.714 [2024-07-15 15:22:38.597294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.330 ms 00:27:00.714 [2024-07-15 15:22:38.597302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.715 [2024-07-15 15:22:38.606027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.715 [2024-07-15 15:22:38.606068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:00.715 [2024-07-15 15:22:38.606080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.230 ms 00:27:00.715 [2024-07-15 15:22:38.606087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.715 [2024-07-15 15:22:38.628276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.715 [2024-07-15 15:22:38.628339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:00.715 [2024-07-15 15:22:38.628353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.216 ms 00:27:00.715 [2024-07-15 15:22:38.628362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.715 [2024-07-15 15:22:38.633551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.715 [2024-07-15 15:22:38.633580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:00.715 [2024-07-15 15:22:38.633596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.164 ms 00:27:00.715 [2024-07-15 15:22:38.633604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.715 [2024-07-15 15:22:38.672874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.715 [2024-07-15 15:22:38.672943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:00.715 [2024-07-15 15:22:38.672958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.299 ms 00:27:00.715 [2024-07-15 15:22:38.672966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.715 [2024-07-15 15:22:38.695718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.715 [2024-07-15 15:22:38.695770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:00.715 [2024-07-15 15:22:38.695784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.705 ms 00:27:00.715 [2024-07-15 15:22:38.695792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.715 [2024-07-15 15:22:38.797428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.715 [2024-07-15 15:22:38.797530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:00.715 [2024-07-15 15:22:38.797548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.771 ms 00:27:00.715 [2024-07-15 15:22:38.797558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.973 [2024-07-15 15:22:38.838256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.973 [2024-07-15 15:22:38.838311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:27:00.973 [2024-07-15 15:22:38.838325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.737 ms 00:27:00.973 [2024-07-15 15:22:38.838332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:27:00.973 [2024-07-15 15:22:38.875923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.973 [2024-07-15 15:22:38.875982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:27:00.973 [2024-07-15 15:22:38.876003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.613 ms 00:27:00.973 [2024-07-15 15:22:38.876027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.973 [2024-07-15 15:22:38.918074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.973 [2024-07-15 15:22:38.918156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:00.973 [2024-07-15 15:22:38.918180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.037 ms 00:27:00.973 [2024-07-15 15:22:38.918194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.973 [2024-07-15 15:22:38.961172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.973 [2024-07-15 15:22:38.961219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:00.973 [2024-07-15 15:22:38.961232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.899 ms 00:27:00.973 [2024-07-15 15:22:38.961240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.973 [2024-07-15 15:22:38.961288] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:00.973 [2024-07-15 15:22:38.961303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 103680 / 261120 wr_cnt: 1 state: open 00:27:00.973 [2024-07-15 15:22:38.961313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:00.973 [2024-07-15 15:22:38.961321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:00.973 [2024-07-15 15:22:38.961329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:00.973 [2024-07-15 15:22:38.961338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961416] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 
15:22:38.961782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.961993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 
00:27:00.974 [2024-07-15 15:22:38.962002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.962029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.962038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.962047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.962055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.962064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.962073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.962081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.962090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.962098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.962107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.962115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.962124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.962132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.962140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.962148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.962156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.962164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.962172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.962181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.962190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.962199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.962220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.962229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.962237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 
wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.962245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.962253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.962261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.962269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.962278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.962286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.962294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.962303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.962311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.962320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.962328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:00.974 [2024-07-15 15:22:38.962347] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:00.974 [2024-07-15 15:22:38.962356] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3e1764c2-857f-4f54-b9e9-71a481f2125b 00:27:00.974 [2024-07-15 15:22:38.962364] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 103680 00:27:00.974 [2024-07-15 15:22:38.962375] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 104640 00:27:00.974 [2024-07-15 15:22:38.962383] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 103680 00:27:00.974 [2024-07-15 15:22:38.962396] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0093 00:27:00.974 [2024-07-15 15:22:38.962404] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:00.974 [2024-07-15 15:22:38.962411] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:00.974 [2024-07-15 15:22:38.962419] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:00.974 [2024-07-15 15:22:38.962426] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:00.974 [2024-07-15 15:22:38.962433] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:00.974 [2024-07-15 15:22:38.962442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.974 [2024-07-15 15:22:38.962451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:00.974 [2024-07-15 15:22:38.962491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.157 ms 00:27:00.974 [2024-07-15 15:22:38.962500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.974 [2024-07-15 15:22:38.985020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.974 [2024-07-15 15:22:38.985061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 
00:27:00.974 [2024-07-15 15:22:38.985073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.509 ms 00:27:00.974 [2024-07-15 15:22:38.985080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.974 [2024-07-15 15:22:38.985596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.974 [2024-07-15 15:22:38.985604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:00.974 [2024-07-15 15:22:38.985612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.491 ms 00:27:00.974 [2024-07-15 15:22:38.985619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.974 [2024-07-15 15:22:39.031637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.974 [2024-07-15 15:22:39.031679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:00.974 [2024-07-15 15:22:39.031691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.974 [2024-07-15 15:22:39.031699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.974 [2024-07-15 15:22:39.031763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.974 [2024-07-15 15:22:39.031771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:00.974 [2024-07-15 15:22:39.031779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.974 [2024-07-15 15:22:39.031786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.974 [2024-07-15 15:22:39.031857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.974 [2024-07-15 15:22:39.031869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:00.974 [2024-07-15 15:22:39.031877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.974 [2024-07-15 15:22:39.031885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.974 [2024-07-15 15:22:39.031900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.974 [2024-07-15 15:22:39.031908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:00.974 [2024-07-15 15:22:39.031915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.974 [2024-07-15 15:22:39.031922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.232 [2024-07-15 15:22:39.154362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.233 [2024-07-15 15:22:39.154417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:01.233 [2024-07-15 15:22:39.154430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.233 [2024-07-15 15:22:39.154439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.233 [2024-07-15 15:22:39.255564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.233 [2024-07-15 15:22:39.255626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:01.233 [2024-07-15 15:22:39.255639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.233 [2024-07-15 15:22:39.255647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.233 [2024-07-15 15:22:39.255714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.233 [2024-07-15 15:22:39.255728] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:01.233 [2024-07-15 15:22:39.255736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.233 [2024-07-15 15:22:39.255742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.233 [2024-07-15 15:22:39.255775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.233 [2024-07-15 15:22:39.255783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:01.233 [2024-07-15 15:22:39.255790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.233 [2024-07-15 15:22:39.255797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.233 [2024-07-15 15:22:39.255893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.233 [2024-07-15 15:22:39.255903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:01.233 [2024-07-15 15:22:39.255914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.233 [2024-07-15 15:22:39.255921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.233 [2024-07-15 15:22:39.255951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.233 [2024-07-15 15:22:39.255960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:01.233 [2024-07-15 15:22:39.255967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.233 [2024-07-15 15:22:39.255974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.233 [2024-07-15 15:22:39.256029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.233 [2024-07-15 15:22:39.256038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:01.233 [2024-07-15 15:22:39.256049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.233 [2024-07-15 15:22:39.256056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.233 [2024-07-15 15:22:39.256098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.233 [2024-07-15 15:22:39.256125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:01.233 [2024-07-15 15:22:39.256132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.233 [2024-07-15 15:22:39.256139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.233 [2024-07-15 15:22:39.256271] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 671.906 ms, result 0 00:27:03.155 00:27:03.155 00:27:03.155 15:22:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:05.059 15:22:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:05.059 [2024-07-15 15:22:42.925763] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
00:27:05.059 [2024-07-15 15:22:42.925888] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85294 ] 00:27:05.059 [2024-07-15 15:22:43.087771] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.326 [2024-07-15 15:22:43.315318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.900 [2024-07-15 15:22:43.702786] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:05.900 [2024-07-15 15:22:43.702859] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:05.900 [2024-07-15 15:22:43.859577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.900 [2024-07-15 15:22:43.859628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:05.900 [2024-07-15 15:22:43.859641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:05.900 [2024-07-15 15:22:43.859649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.900 [2024-07-15 15:22:43.859705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.900 [2024-07-15 15:22:43.859716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:05.900 [2024-07-15 15:22:43.859725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:27:05.900 [2024-07-15 15:22:43.859735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.900 [2024-07-15 15:22:43.859754] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:05.900 [2024-07-15 15:22:43.860875] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:05.900 [2024-07-15 15:22:43.860898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.900 [2024-07-15 15:22:43.860909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:05.900 [2024-07-15 15:22:43.860917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.151 ms 00:27:05.900 [2024-07-15 15:22:43.860924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.900 [2024-07-15 15:22:43.862319] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:05.900 [2024-07-15 15:22:43.882116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.900 [2024-07-15 15:22:43.882157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:05.900 [2024-07-15 15:22:43.882169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.836 ms 00:27:05.900 [2024-07-15 15:22:43.882176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.900 [2024-07-15 15:22:43.882243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.900 [2024-07-15 15:22:43.882254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:05.900 [2024-07-15 15:22:43.882265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:27:05.900 [2024-07-15 15:22:43.882272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.900 [2024-07-15 15:22:43.889241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:05.900 [2024-07-15 15:22:43.889275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:05.900 [2024-07-15 15:22:43.889285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.922 ms 00:27:05.900 [2024-07-15 15:22:43.889293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.900 [2024-07-15 15:22:43.889372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.900 [2024-07-15 15:22:43.889388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:05.900 [2024-07-15 15:22:43.889397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:27:05.900 [2024-07-15 15:22:43.889404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.900 [2024-07-15 15:22:43.889450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.900 [2024-07-15 15:22:43.889459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:05.900 [2024-07-15 15:22:43.889468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:05.900 [2024-07-15 15:22:43.889476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.900 [2024-07-15 15:22:43.889500] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:05.900 [2024-07-15 15:22:43.895130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.900 [2024-07-15 15:22:43.895160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:05.900 [2024-07-15 15:22:43.895169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.648 ms 00:27:05.900 [2024-07-15 15:22:43.895177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.900 [2024-07-15 15:22:43.895211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.900 [2024-07-15 15:22:43.895219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:05.900 [2024-07-15 15:22:43.895227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:05.900 [2024-07-15 15:22:43.895234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.900 [2024-07-15 15:22:43.895278] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:05.900 [2024-07-15 15:22:43.895299] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:05.900 [2024-07-15 15:22:43.895334] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:05.900 [2024-07-15 15:22:43.895351] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:27:05.900 [2024-07-15 15:22:43.895435] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:05.900 [2024-07-15 15:22:43.895446] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:05.900 [2024-07-15 15:22:43.895456] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:27:05.900 [2024-07-15 15:22:43.895466] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:05.900 [2024-07-15 15:22:43.895474] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:05.900 [2024-07-15 15:22:43.895483] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:05.900 [2024-07-15 15:22:43.895491] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:05.900 [2024-07-15 15:22:43.895499] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:05.900 [2024-07-15 15:22:43.895506] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:05.900 [2024-07-15 15:22:43.895514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.900 [2024-07-15 15:22:43.895524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:05.900 [2024-07-15 15:22:43.895531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.239 ms 00:27:05.900 [2024-07-15 15:22:43.895538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.900 [2024-07-15 15:22:43.895605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.900 [2024-07-15 15:22:43.895613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:05.900 [2024-07-15 15:22:43.895620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:27:05.900 [2024-07-15 15:22:43.895627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.900 [2024-07-15 15:22:43.895709] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:05.900 [2024-07-15 15:22:43.895719] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:05.901 [2024-07-15 15:22:43.895730] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:05.901 [2024-07-15 15:22:43.895738] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:05.901 [2024-07-15 15:22:43.895746] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:05.901 [2024-07-15 15:22:43.895754] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:05.901 [2024-07-15 15:22:43.895760] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:05.901 [2024-07-15 15:22:43.895768] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:05.901 [2024-07-15 15:22:43.895776] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:05.901 [2024-07-15 15:22:43.895783] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:05.901 [2024-07-15 15:22:43.895789] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:05.901 [2024-07-15 15:22:43.895796] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:05.901 [2024-07-15 15:22:43.895802] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:05.901 [2024-07-15 15:22:43.895809] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:05.901 [2024-07-15 15:22:43.895815] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:05.901 [2024-07-15 15:22:43.895821] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:05.901 [2024-07-15 15:22:43.895828] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:05.901 [2024-07-15 15:22:43.895835] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:05.901 [2024-07-15 15:22:43.895843] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:05.901 [2024-07-15 15:22:43.895849] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:05.901 [2024-07-15 15:22:43.895869] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:05.901 [2024-07-15 15:22:43.895876] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:05.901 [2024-07-15 15:22:43.895884] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:05.901 [2024-07-15 15:22:43.895891] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:05.901 [2024-07-15 15:22:43.895898] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:05.901 [2024-07-15 15:22:43.895904] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:05.901 [2024-07-15 15:22:43.895911] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:05.901 [2024-07-15 15:22:43.895917] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:05.901 [2024-07-15 15:22:43.895923] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:05.901 [2024-07-15 15:22:43.895930] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:05.901 [2024-07-15 15:22:43.895936] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:05.901 [2024-07-15 15:22:43.895943] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:05.901 [2024-07-15 15:22:43.895949] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:05.901 [2024-07-15 15:22:43.895955] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:05.901 [2024-07-15 15:22:43.895963] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:05.901 [2024-07-15 15:22:43.895970] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:05.901 [2024-07-15 15:22:43.895976] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:05.901 [2024-07-15 15:22:43.895982] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:05.901 [2024-07-15 15:22:43.896004] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:05.901 [2024-07-15 15:22:43.896011] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:05.901 [2024-07-15 15:22:43.896019] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:05.901 [2024-07-15 15:22:43.896026] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:05.901 [2024-07-15 15:22:43.896033] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:05.901 [2024-07-15 15:22:43.896039] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:05.901 [2024-07-15 15:22:43.896047] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:05.901 [2024-07-15 15:22:43.896055] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:05.901 [2024-07-15 15:22:43.896062] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:05.901 [2024-07-15 15:22:43.896070] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:05.901 [2024-07-15 15:22:43.896076] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:05.901 [2024-07-15 15:22:43.896083] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:05.901 
[2024-07-15 15:22:43.896090] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:05.901 [2024-07-15 15:22:43.896097] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:05.901 [2024-07-15 15:22:43.896103] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:05.901 [2024-07-15 15:22:43.896111] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:05.901 [2024-07-15 15:22:43.896139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:05.901 [2024-07-15 15:22:43.896148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:05.901 [2024-07-15 15:22:43.896156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:05.901 [2024-07-15 15:22:43.896163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:05.901 [2024-07-15 15:22:43.896170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:05.901 [2024-07-15 15:22:43.896178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:05.901 [2024-07-15 15:22:43.896185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:05.901 [2024-07-15 15:22:43.896192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:05.901 [2024-07-15 15:22:43.896200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:05.901 [2024-07-15 15:22:43.896207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:05.901 [2024-07-15 15:22:43.896225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:05.901 [2024-07-15 15:22:43.896233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:05.901 [2024-07-15 15:22:43.896241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:05.901 [2024-07-15 15:22:43.896248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:05.901 [2024-07-15 15:22:43.896256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:05.901 [2024-07-15 15:22:43.896262] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:05.901 [2024-07-15 15:22:43.896270] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:05.901 [2024-07-15 15:22:43.896278] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:05.901 [2024-07-15 15:22:43.896286] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:05.901 [2024-07-15 15:22:43.896293] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:05.901 [2024-07-15 15:22:43.896300] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:05.901 [2024-07-15 15:22:43.896308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.901 [2024-07-15 15:22:43.896319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:05.901 [2024-07-15 15:22:43.896328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.653 ms 00:27:05.901 [2024-07-15 15:22:43.896335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.901 [2024-07-15 15:22:43.948847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.901 [2024-07-15 15:22:43.948900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:05.901 [2024-07-15 15:22:43.948913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.561 ms 00:27:05.901 [2024-07-15 15:22:43.948920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.901 [2024-07-15 15:22:43.949033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.901 [2024-07-15 15:22:43.949042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:05.901 [2024-07-15 15:22:43.949050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:27:05.901 [2024-07-15 15:22:43.949057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.901 [2024-07-15 15:22:43.999049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.901 [2024-07-15 15:22:43.999094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:05.901 [2024-07-15 15:22:43.999107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.001 ms 00:27:05.901 [2024-07-15 15:22:43.999116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.901 [2024-07-15 15:22:43.999177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.901 [2024-07-15 15:22:43.999187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:05.901 [2024-07-15 15:22:43.999195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:05.901 [2024-07-15 15:22:43.999204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.901 [2024-07-15 15:22:43.999679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.901 [2024-07-15 15:22:43.999690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:05.901 [2024-07-15 15:22:43.999699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:27:05.901 [2024-07-15 15:22:43.999706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.901 [2024-07-15 15:22:43.999818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.901 [2024-07-15 15:22:43.999830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:05.901 [2024-07-15 15:22:43.999839] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:27:05.901 [2024-07-15 15:22:43.999846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.160 [2024-07-15 15:22:44.019872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.160 [2024-07-15 15:22:44.019912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:06.160 [2024-07-15 15:22:44.019925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.042 ms 00:27:06.160 [2024-07-15 15:22:44.019932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.160 [2024-07-15 15:22:44.039736] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:27:06.160 [2024-07-15 15:22:44.039805] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:06.160 [2024-07-15 15:22:44.039818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.160 [2024-07-15 15:22:44.039827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:06.160 [2024-07-15 15:22:44.039852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.790 ms 00:27:06.160 [2024-07-15 15:22:44.039860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.160 [2024-07-15 15:22:44.072753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.160 [2024-07-15 15:22:44.072802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:06.160 [2024-07-15 15:22:44.072815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.905 ms 00:27:06.160 [2024-07-15 15:22:44.072830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.160 [2024-07-15 15:22:44.093478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.160 [2024-07-15 15:22:44.093524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:06.160 [2024-07-15 15:22:44.093536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.632 ms 00:27:06.160 [2024-07-15 15:22:44.093545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.160 [2024-07-15 15:22:44.115348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.160 [2024-07-15 15:22:44.115392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:06.160 [2024-07-15 15:22:44.115407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.777 ms 00:27:06.160 [2024-07-15 15:22:44.115416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.160 [2024-07-15 15:22:44.116328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.160 [2024-07-15 15:22:44.116363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:06.160 [2024-07-15 15:22:44.116375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.789 ms 00:27:06.160 [2024-07-15 15:22:44.116383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.160 [2024-07-15 15:22:44.214023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.160 [2024-07-15 15:22:44.214100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:06.160 [2024-07-15 15:22:44.214116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 97.798 ms 00:27:06.160 [2024-07-15 15:22:44.214125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.160 [2024-07-15 15:22:44.228337] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:06.160 [2024-07-15 15:22:44.231650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.160 [2024-07-15 15:22:44.231686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:06.160 [2024-07-15 15:22:44.231699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.468 ms 00:27:06.160 [2024-07-15 15:22:44.231723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.160 [2024-07-15 15:22:44.231850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.160 [2024-07-15 15:22:44.231861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:06.160 [2024-07-15 15:22:44.231869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:06.160 [2024-07-15 15:22:44.231877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.160 [2024-07-15 15:22:44.233346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.160 [2024-07-15 15:22:44.233384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:06.160 [2024-07-15 15:22:44.233394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.441 ms 00:27:06.160 [2024-07-15 15:22:44.233401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.160 [2024-07-15 15:22:44.233431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.160 [2024-07-15 15:22:44.233439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:06.160 [2024-07-15 15:22:44.233447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:06.160 [2024-07-15 15:22:44.233455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.160 [2024-07-15 15:22:44.233492] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:06.160 [2024-07-15 15:22:44.233502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.160 [2024-07-15 15:22:44.233509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:06.160 [2024-07-15 15:22:44.233520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:06.160 [2024-07-15 15:22:44.233527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.418 [2024-07-15 15:22:44.273609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.418 [2024-07-15 15:22:44.273658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:06.418 [2024-07-15 15:22:44.273672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.124 ms 00:27:06.418 [2024-07-15 15:22:44.273681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.418 [2024-07-15 15:22:44.273764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.418 [2024-07-15 15:22:44.273783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:06.418 [2024-07-15 15:22:44.273793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:27:06.418 [2024-07-15 15:22:44.273800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:27:06.418 [2024-07-15 15:22:44.280192] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 419.987 ms, result 0 00:27:36.273  Copying: 984/1048576 [kB] (984 kBps) Copying: 5084/1048576 [kB] (4100 kBps) Copying: 38/1024 [MB] (33 MBps) Copying: 76/1024 [MB] (37 MBps) Copying: 113/1024 [MB] (37 MBps) Copying: 151/1024 [MB] (37 MBps) Copying: 190/1024 [MB] (38 MBps) Copying: 227/1024 [MB] (37 MBps) Copying: 265/1024 [MB] (37 MBps) Copying: 302/1024 [MB] (37 MBps) Copying: 338/1024 [MB] (35 MBps) Copying: 375/1024 [MB] (37 MBps) Copying: 412/1024 [MB] (36 MBps) Copying: 449/1024 [MB] (36 MBps) Copying: 489/1024 [MB] (39 MBps) Copying: 526/1024 [MB] (37 MBps) Copying: 564/1024 [MB] (37 MBps) Copying: 602/1024 [MB] (38 MBps) Copying: 640/1024 [MB] (37 MBps) Copying: 677/1024 [MB] (37 MBps) Copying: 715/1024 [MB] (37 MBps) Copying: 751/1024 [MB] (36 MBps) Copying: 788/1024 [MB] (37 MBps) Copying: 825/1024 [MB] (36 MBps) Copying: 863/1024 [MB] (38 MBps) Copying: 902/1024 [MB] (38 MBps) Copying: 941/1024 [MB] (38 MBps) Copying: 979/1024 [MB] (38 MBps) Copying: 1017/1024 [MB] (37 MBps) Copying: 1024/1024 [MB] (average 35 MBps)[2024-07-15 15:23:14.079671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.273 [2024-07-15 15:23:14.079773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:36.273 [2024-07-15 15:23:14.079797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:36.273 [2024-07-15 15:23:14.079812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.273 [2024-07-15 15:23:14.079864] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:36.273 [2024-07-15 15:23:14.084576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.273 [2024-07-15 15:23:14.084621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:36.273 [2024-07-15 15:23:14.084634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.694 ms 00:27:36.273 [2024-07-15 15:23:14.084641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.273 [2024-07-15 15:23:14.084857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.273 [2024-07-15 15:23:14.084866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:36.273 [2024-07-15 15:23:14.084876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.182 ms 00:27:36.273 [2024-07-15 15:23:14.084884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.273 [2024-07-15 15:23:14.097706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.273 [2024-07-15 15:23:14.097767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:36.273 [2024-07-15 15:23:14.097782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.822 ms 00:27:36.273 [2024-07-15 15:23:14.097792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.273 [2024-07-15 15:23:14.104049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.273 [2024-07-15 15:23:14.104106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:36.273 [2024-07-15 15:23:14.104116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.231 ms 00:27:36.273 [2024-07-15 15:23:14.104125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:27:36.273 [2024-07-15 15:23:14.150917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.273 [2024-07-15 15:23:14.151010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:36.273 [2024-07-15 15:23:14.151029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.784 ms 00:27:36.273 [2024-07-15 15:23:14.151041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.273 [2024-07-15 15:23:14.173203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.273 [2024-07-15 15:23:14.173282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:36.273 [2024-07-15 15:23:14.173299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.127 ms 00:27:36.273 [2024-07-15 15:23:14.173310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.273 [2024-07-15 15:23:14.176623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.273 [2024-07-15 15:23:14.176873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:36.273 [2024-07-15 15:23:14.176890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.270 ms 00:27:36.273 [2024-07-15 15:23:14.176902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.273 [2024-07-15 15:23:14.220038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.273 [2024-07-15 15:23:14.220098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:27:36.273 [2024-07-15 15:23:14.220111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.191 ms 00:27:36.273 [2024-07-15 15:23:14.220135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.273 [2024-07-15 15:23:14.260819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.273 [2024-07-15 15:23:14.260876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:27:36.273 [2024-07-15 15:23:14.260889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.686 ms 00:27:36.273 [2024-07-15 15:23:14.260897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.273 [2024-07-15 15:23:14.299878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.273 [2024-07-15 15:23:14.299966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:36.273 [2024-07-15 15:23:14.299984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.992 ms 00:27:36.273 [2024-07-15 15:23:14.300031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.273 [2024-07-15 15:23:14.334766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.273 [2024-07-15 15:23:14.334839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:36.273 [2024-07-15 15:23:14.334858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.608 ms 00:27:36.273 [2024-07-15 15:23:14.334868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.273 [2024-07-15 15:23:14.334932] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:36.273 [2024-07-15 15:23:14.334953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:36.273 [2024-07-15 15:23:14.334967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 2: 3840 / 261120 wr_cnt: 1 state: open 00:27:36.273 [2024-07-15 15:23:14.334979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:36.273 [2024-07-15 15:23:14.335007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:36.273 [2024-07-15 15:23:14.335020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:36.273 [2024-07-15 15:23:14.335033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:36.273 [2024-07-15 15:23:14.335046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:36.273 [2024-07-15 15:23:14.335058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:36.273 [2024-07-15 15:23:14.335072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:36.273 [2024-07-15 15:23:14.335099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:36.273 [2024-07-15 15:23:14.335111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:36.273 [2024-07-15 15:23:14.335123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:36.273 [2024-07-15 15:23:14.335134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:36.273 [2024-07-15 15:23:14.335147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:36.273 [2024-07-15 15:23:14.335158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:36.273 [2024-07-15 15:23:14.335170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:36.273 [2024-07-15 15:23:14.335182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:36.273 [2024-07-15 15:23:14.335195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:36.273 [2024-07-15 15:23:14.335208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:36.273 [2024-07-15 15:23:14.335222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:36.273 [2024-07-15 15:23:14.335234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:36.273 [2024-07-15 15:23:14.335246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:36.273 [2024-07-15 15:23:14.335258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:36.273 [2024-07-15 15:23:14.335271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:36.273 [2024-07-15 15:23:14.335283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:36.273 [2024-07-15 15:23:14.335295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:36.273 [2024-07-15 15:23:14.335307] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 
15:23:14.335582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 
00:27:36.274 [2024-07-15 15:23:14.335884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.335979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.336010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.336021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.336032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.336044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.336056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.336068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.336080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.336092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.336104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.336115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.336125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.336135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.336145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.336155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.336164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:36.274 [2024-07-15 15:23:14.336184] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:36.274 
[2024-07-15 15:23:14.336194] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3e1764c2-857f-4f54-b9e9-71a481f2125b 00:27:36.274 [2024-07-15 15:23:14.336206] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264960 00:27:36.274 [2024-07-15 15:23:14.336215] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 163264 00:27:36.274 [2024-07-15 15:23:14.336225] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 161280 00:27:36.274 [2024-07-15 15:23:14.336245] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0123 00:27:36.274 [2024-07-15 15:23:14.336255] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:36.274 [2024-07-15 15:23:14.336270] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:36.274 [2024-07-15 15:23:14.336282] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:36.274 [2024-07-15 15:23:14.336292] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:36.274 [2024-07-15 15:23:14.336302] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:36.274 [2024-07-15 15:23:14.336314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.274 [2024-07-15 15:23:14.336326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:36.274 [2024-07-15 15:23:14.336338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.385 ms 00:27:36.274 [2024-07-15 15:23:14.336349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.274 [2024-07-15 15:23:14.355462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.274 [2024-07-15 15:23:14.355527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:36.274 [2024-07-15 15:23:14.355544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.084 ms 00:27:36.274 [2024-07-15 15:23:14.355575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.274 [2024-07-15 15:23:14.356111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.274 [2024-07-15 15:23:14.356131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:36.274 [2024-07-15 15:23:14.356145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.498 ms 00:27:36.274 [2024-07-15 15:23:14.356157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.534 [2024-07-15 15:23:14.400089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:36.534 [2024-07-15 15:23:14.400155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:36.534 [2024-07-15 15:23:14.400176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:36.534 [2024-07-15 15:23:14.400184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.534 [2024-07-15 15:23:14.400258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:36.534 [2024-07-15 15:23:14.400266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:36.534 [2024-07-15 15:23:14.400274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:36.534 [2024-07-15 15:23:14.400281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.534 [2024-07-15 15:23:14.400374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:36.534 
[2024-07-15 15:23:14.400386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:36.534 [2024-07-15 15:23:14.400394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:36.534 [2024-07-15 15:23:14.400405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.534 [2024-07-15 15:23:14.400422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:36.534 [2024-07-15 15:23:14.400431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:36.534 [2024-07-15 15:23:14.400438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:36.534 [2024-07-15 15:23:14.400446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.534 [2024-07-15 15:23:14.532453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:36.534 [2024-07-15 15:23:14.532522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:36.534 [2024-07-15 15:23:14.532543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:36.534 [2024-07-15 15:23:14.532552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.793 [2024-07-15 15:23:14.645867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:36.793 [2024-07-15 15:23:14.645936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:36.793 [2024-07-15 15:23:14.645950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:36.793 [2024-07-15 15:23:14.645960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.793 [2024-07-15 15:23:14.646056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:36.793 [2024-07-15 15:23:14.646082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:36.793 [2024-07-15 15:23:14.646091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:36.793 [2024-07-15 15:23:14.646099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.793 [2024-07-15 15:23:14.646141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:36.793 [2024-07-15 15:23:14.646151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:36.793 [2024-07-15 15:23:14.646173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:36.793 [2024-07-15 15:23:14.646182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.793 [2024-07-15 15:23:14.646313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:36.793 [2024-07-15 15:23:14.646331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:36.793 [2024-07-15 15:23:14.646340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:36.793 [2024-07-15 15:23:14.646349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.793 [2024-07-15 15:23:14.646390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:36.793 [2024-07-15 15:23:14.646402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:36.793 [2024-07-15 15:23:14.646410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:36.793 [2024-07-15 15:23:14.646418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.793 [2024-07-15 15:23:14.646455] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:36.793 [2024-07-15 15:23:14.646465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:36.793 [2024-07-15 15:23:14.646474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:36.793 [2024-07-15 15:23:14.646481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.793 [2024-07-15 15:23:14.646530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:36.793 [2024-07-15 15:23:14.646540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:36.793 [2024-07-15 15:23:14.646549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:36.793 [2024-07-15 15:23:14.646564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.793 [2024-07-15 15:23:14.646694] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 568.095 ms, result 0 00:27:38.171 00:27:38.171 00:27:38.171 15:23:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:40.075 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:40.075 15:23:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:40.075 [2024-07-15 15:23:17.790388] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:27:40.075 [2024-07-15 15:23:17.790513] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85642 ] 00:27:40.075 [2024-07-15 15:23:17.955521] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.333 [2024-07-15 15:23:18.191525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:40.591 [2024-07-15 15:23:18.592522] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:40.591 [2024-07-15 15:23:18.592675] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:40.849 [2024-07-15 15:23:18.750210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.849 [2024-07-15 15:23:18.750339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:40.849 [2024-07-15 15:23:18.750373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:40.849 [2024-07-15 15:23:18.750395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.849 [2024-07-15 15:23:18.750479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.849 [2024-07-15 15:23:18.750535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:40.849 [2024-07-15 15:23:18.750612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:27:40.849 [2024-07-15 15:23:18.750657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.849 [2024-07-15 15:23:18.750688] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:40.849 [2024-07-15 15:23:18.752132] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:40.849 [2024-07-15 15:23:18.752166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.849 [2024-07-15 15:23:18.752178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:40.849 [2024-07-15 15:23:18.752188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.488 ms 00:27:40.849 [2024-07-15 15:23:18.752196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.849 [2024-07-15 15:23:18.753691] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:40.849 [2024-07-15 15:23:18.777018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.849 [2024-07-15 15:23:18.777088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:40.849 [2024-07-15 15:23:18.777104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.370 ms 00:27:40.849 [2024-07-15 15:23:18.777113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.849 [2024-07-15 15:23:18.777220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.849 [2024-07-15 15:23:18.777233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:40.849 [2024-07-15 15:23:18.777258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:27:40.849 [2024-07-15 15:23:18.777266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.849 [2024-07-15 15:23:18.784824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.849 [2024-07-15 15:23:18.784857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:40.849 [2024-07-15 15:23:18.784867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.482 ms 00:27:40.849 [2024-07-15 15:23:18.784876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.849 [2024-07-15 15:23:18.784957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.849 [2024-07-15 15:23:18.784976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:40.849 [2024-07-15 15:23:18.784984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:27:40.849 [2024-07-15 15:23:18.785004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.849 [2024-07-15 15:23:18.785072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.849 [2024-07-15 15:23:18.785082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:40.849 [2024-07-15 15:23:18.785098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:40.849 [2024-07-15 15:23:18.785106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.849 [2024-07-15 15:23:18.785131] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:40.849 [2024-07-15 15:23:18.790706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.849 [2024-07-15 15:23:18.790738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:40.849 [2024-07-15 15:23:18.790750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.592 ms 00:27:40.849 [2024-07-15 15:23:18.790758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.849 [2024-07-15 
15:23:18.790797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.849 [2024-07-15 15:23:18.790807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:40.849 [2024-07-15 15:23:18.790816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:40.849 [2024-07-15 15:23:18.790825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.849 [2024-07-15 15:23:18.790877] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:40.849 [2024-07-15 15:23:18.790900] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:40.849 [2024-07-15 15:23:18.790936] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:40.849 [2024-07-15 15:23:18.790954] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:27:40.849 [2024-07-15 15:23:18.791057] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:40.849 [2024-07-15 15:23:18.791070] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:40.849 [2024-07-15 15:23:18.791081] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:27:40.849 [2024-07-15 15:23:18.791091] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:40.849 [2024-07-15 15:23:18.791101] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:40.849 [2024-07-15 15:23:18.791110] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:40.849 [2024-07-15 15:23:18.791118] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:40.849 [2024-07-15 15:23:18.791126] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:40.849 [2024-07-15 15:23:18.791135] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:40.849 [2024-07-15 15:23:18.791144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.849 [2024-07-15 15:23:18.791157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:40.849 [2024-07-15 15:23:18.791166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:27:40.849 [2024-07-15 15:23:18.791174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.849 [2024-07-15 15:23:18.791251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.849 [2024-07-15 15:23:18.791261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:40.849 [2024-07-15 15:23:18.791269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:27:40.849 [2024-07-15 15:23:18.791278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.849 [2024-07-15 15:23:18.791374] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:40.849 [2024-07-15 15:23:18.791385] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:40.849 [2024-07-15 15:23:18.791397] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:40.849 [2024-07-15 15:23:18.791407] ftl_layout.c: 121:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:27:40.849 [2024-07-15 15:23:18.791416] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:40.849 [2024-07-15 15:23:18.791424] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:40.849 [2024-07-15 15:23:18.791431] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:40.849 [2024-07-15 15:23:18.791439] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:40.849 [2024-07-15 15:23:18.791446] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:40.850 [2024-07-15 15:23:18.791454] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:40.850 [2024-07-15 15:23:18.791461] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:40.850 [2024-07-15 15:23:18.791468] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:40.850 [2024-07-15 15:23:18.791476] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:40.850 [2024-07-15 15:23:18.791483] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:40.850 [2024-07-15 15:23:18.791491] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:40.850 [2024-07-15 15:23:18.791499] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:40.850 [2024-07-15 15:23:18.791506] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:40.850 [2024-07-15 15:23:18.791514] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:40.850 [2024-07-15 15:23:18.791521] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:40.850 [2024-07-15 15:23:18.791529] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:40.850 [2024-07-15 15:23:18.791551] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:40.850 [2024-07-15 15:23:18.791560] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:40.850 [2024-07-15 15:23:18.791568] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:40.850 [2024-07-15 15:23:18.791575] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:40.850 [2024-07-15 15:23:18.791584] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:40.850 [2024-07-15 15:23:18.791591] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:40.850 [2024-07-15 15:23:18.791599] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:40.850 [2024-07-15 15:23:18.791606] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:40.850 [2024-07-15 15:23:18.791613] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:40.850 [2024-07-15 15:23:18.791621] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:40.850 [2024-07-15 15:23:18.791629] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:40.850 [2024-07-15 15:23:18.791636] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:40.850 [2024-07-15 15:23:18.791644] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:40.850 [2024-07-15 15:23:18.791651] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:40.850 [2024-07-15 15:23:18.791659] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:40.850 [2024-07-15 15:23:18.791666] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:40.850 [2024-07-15 15:23:18.791673] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:40.850 [2024-07-15 15:23:18.791680] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:40.850 [2024-07-15 15:23:18.791688] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:40.850 [2024-07-15 15:23:18.791695] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:40.850 [2024-07-15 15:23:18.791702] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:40.850 [2024-07-15 15:23:18.791709] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:40.850 [2024-07-15 15:23:18.791716] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:40.850 [2024-07-15 15:23:18.791723] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:40.850 [2024-07-15 15:23:18.791732] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:40.850 [2024-07-15 15:23:18.791740] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:40.850 [2024-07-15 15:23:18.791747] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:40.850 [2024-07-15 15:23:18.791756] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:40.850 [2024-07-15 15:23:18.791763] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:40.850 [2024-07-15 15:23:18.791771] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:40.850 [2024-07-15 15:23:18.791779] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:40.850 [2024-07-15 15:23:18.791787] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:40.850 [2024-07-15 15:23:18.791794] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:40.850 [2024-07-15 15:23:18.791803] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:40.850 [2024-07-15 15:23:18.791814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:40.850 [2024-07-15 15:23:18.791823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:40.850 [2024-07-15 15:23:18.791831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:40.850 [2024-07-15 15:23:18.791839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:40.850 [2024-07-15 15:23:18.791859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:40.850 [2024-07-15 15:23:18.791867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:40.850 [2024-07-15 15:23:18.791875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:40.850 [2024-07-15 15:23:18.791882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:40.850 [2024-07-15 
15:23:18.791890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:40.850 [2024-07-15 15:23:18.791897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:40.850 [2024-07-15 15:23:18.791904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:40.850 [2024-07-15 15:23:18.791911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:40.850 [2024-07-15 15:23:18.791919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:40.850 [2024-07-15 15:23:18.791926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:40.850 [2024-07-15 15:23:18.791951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:40.850 [2024-07-15 15:23:18.791959] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:40.850 [2024-07-15 15:23:18.791967] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:40.850 [2024-07-15 15:23:18.791976] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:40.850 [2024-07-15 15:23:18.791984] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:40.850 [2024-07-15 15:23:18.791992] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:40.850 [2024-07-15 15:23:18.792000] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:40.850 [2024-07-15 15:23:18.792009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.850 [2024-07-15 15:23:18.792033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:40.850 [2024-07-15 15:23:18.792042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.696 ms 00:27:40.850 [2024-07-15 15:23:18.792050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.850 [2024-07-15 15:23:18.852491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.850 [2024-07-15 15:23:18.852547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:40.850 [2024-07-15 15:23:18.852560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.500 ms 00:27:40.850 [2024-07-15 15:23:18.852568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.850 [2024-07-15 15:23:18.852668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.850 [2024-07-15 15:23:18.852677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:40.850 [2024-07-15 15:23:18.852685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:27:40.850 [2024-07-15 15:23:18.852691] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.850 [2024-07-15 15:23:18.903330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.850 [2024-07-15 15:23:18.903387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:40.850 [2024-07-15 15:23:18.903400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.670 ms 00:27:40.850 [2024-07-15 15:23:18.903408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.850 [2024-07-15 15:23:18.903472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.850 [2024-07-15 15:23:18.903481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:40.850 [2024-07-15 15:23:18.903489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:40.850 [2024-07-15 15:23:18.903497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.850 [2024-07-15 15:23:18.903956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.850 [2024-07-15 15:23:18.903968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:40.850 [2024-07-15 15:23:18.903976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:27:40.850 [2024-07-15 15:23:18.903983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.850 [2024-07-15 15:23:18.904113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.850 [2024-07-15 15:23:18.904127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:40.850 [2024-07-15 15:23:18.904135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:27:40.850 [2024-07-15 15:23:18.904143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.850 [2024-07-15 15:23:18.923824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.850 [2024-07-15 15:23:18.923870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:40.850 [2024-07-15 15:23:18.923883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.693 ms 00:27:40.850 [2024-07-15 15:23:18.923891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.850 [2024-07-15 15:23:18.945610] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:40.850 [2024-07-15 15:23:18.945654] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:40.850 [2024-07-15 15:23:18.945669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.850 [2024-07-15 15:23:18.945679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:40.850 [2024-07-15 15:23:18.945690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.683 ms 00:27:40.850 [2024-07-15 15:23:18.945698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.111 [2024-07-15 15:23:18.982377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.111 [2024-07-15 15:23:18.982429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:41.111 [2024-07-15 15:23:18.982444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.696 ms 00:27:41.111 [2024-07-15 15:23:18.982460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.111 [2024-07-15 
15:23:19.002888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.111 [2024-07-15 15:23:19.002928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:41.111 [2024-07-15 15:23:19.002940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.397 ms 00:27:41.111 [2024-07-15 15:23:19.002948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.111 [2024-07-15 15:23:19.024618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.111 [2024-07-15 15:23:19.024667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:41.111 [2024-07-15 15:23:19.024679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.657 ms 00:27:41.111 [2024-07-15 15:23:19.024686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.111 [2024-07-15 15:23:19.025644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.111 [2024-07-15 15:23:19.025674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:41.111 [2024-07-15 15:23:19.025685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.829 ms 00:27:41.111 [2024-07-15 15:23:19.025693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.111 [2024-07-15 15:23:19.118091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.111 [2024-07-15 15:23:19.118161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:41.111 [2024-07-15 15:23:19.118176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 92.551 ms 00:27:41.111 [2024-07-15 15:23:19.118184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.111 [2024-07-15 15:23:19.132267] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:41.111 [2024-07-15 15:23:19.135519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.111 [2024-07-15 15:23:19.135559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:41.111 [2024-07-15 15:23:19.135573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.281 ms 00:27:41.111 [2024-07-15 15:23:19.135582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.111 [2024-07-15 15:23:19.135692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.111 [2024-07-15 15:23:19.135704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:41.111 [2024-07-15 15:23:19.135713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:41.111 [2024-07-15 15:23:19.135722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.111 [2024-07-15 15:23:19.136570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.111 [2024-07-15 15:23:19.136597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:41.111 [2024-07-15 15:23:19.136608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.815 ms 00:27:41.111 [2024-07-15 15:23:19.136616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.111 [2024-07-15 15:23:19.136644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.111 [2024-07-15 15:23:19.136653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:41.111 [2024-07-15 15:23:19.136662] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:41.111 [2024-07-15 15:23:19.136670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.111 [2024-07-15 15:23:19.136704] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:41.111 [2024-07-15 15:23:19.136714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.111 [2024-07-15 15:23:19.136723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:41.111 [2024-07-15 15:23:19.136734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:41.111 [2024-07-15 15:23:19.136742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.111 [2024-07-15 15:23:19.181701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.111 [2024-07-15 15:23:19.181756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:41.111 [2024-07-15 15:23:19.181770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.024 ms 00:27:41.111 [2024-07-15 15:23:19.181777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.111 [2024-07-15 15:23:19.181864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.111 [2024-07-15 15:23:19.181883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:41.111 [2024-07-15 15:23:19.181891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:27:41.111 [2024-07-15 15:23:19.181898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.111 [2024-07-15 15:23:19.183184] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 433.258 ms, result 0 00:28:10.135  Copying: 36/1024 [MB] (36 MBps) Copying: 71/1024 [MB] (35 MBps) Copying: 108/1024 [MB] (36 MBps) Copying: 142/1024 [MB] (34 MBps) Copying: 179/1024 [MB] (36 MBps) Copying: 214/1024 [MB] (35 MBps) Copying: 250/1024 [MB] (35 MBps) Copying: 286/1024 [MB] (35 MBps) Copying: 322/1024 [MB] (36 MBps) Copying: 358/1024 [MB] (35 MBps) Copying: 394/1024 [MB] (36 MBps) Copying: 430/1024 [MB] (35 MBps) Copying: 466/1024 [MB] (36 MBps) Copying: 502/1024 [MB] (35 MBps) Copying: 538/1024 [MB] (35 MBps) Copying: 574/1024 [MB] (35 MBps) Copying: 609/1024 [MB] (35 MBps) Copying: 644/1024 [MB] (35 MBps) Copying: 681/1024 [MB] (36 MBps) Copying: 716/1024 [MB] (35 MBps) Copying: 752/1024 [MB] (35 MBps) Copying: 787/1024 [MB] (35 MBps) Copying: 823/1024 [MB] (35 MBps) Copying: 859/1024 [MB] (36 MBps) Copying: 893/1024 [MB] (33 MBps) Copying: 929/1024 [MB] (36 MBps) Copying: 966/1024 [MB] (36 MBps) Copying: 1002/1024 [MB] (35 MBps) Copying: 1024/1024 [MB] (average 35 MBps)[2024-07-15 15:23:47.892934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.135 [2024-07-15 15:23:47.893004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:10.135 [2024-07-15 15:23:47.893020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:10.135 [2024-07-15 15:23:47.893029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.135 [2024-07-15 15:23:47.893052] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:10.135 [2024-07-15 15:23:47.897377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.135 [2024-07-15 
15:23:47.897413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:10.135 [2024-07-15 15:23:47.897425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.317 ms 00:28:10.135 [2024-07-15 15:23:47.897432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.135 [2024-07-15 15:23:47.897626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.135 [2024-07-15 15:23:47.897635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:10.135 [2024-07-15 15:23:47.897643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:28:10.135 [2024-07-15 15:23:47.897651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.135 [2024-07-15 15:23:47.900525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.135 [2024-07-15 15:23:47.900544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:10.135 [2024-07-15 15:23:47.900552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.867 ms 00:28:10.135 [2024-07-15 15:23:47.900559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.135 [2024-07-15 15:23:47.905637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.135 [2024-07-15 15:23:47.905662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:10.135 [2024-07-15 15:23:47.905675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.073 ms 00:28:10.135 [2024-07-15 15:23:47.905682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.135 [2024-07-15 15:23:47.944368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.135 [2024-07-15 15:23:47.944408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:10.135 [2024-07-15 15:23:47.944419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.680 ms 00:28:10.135 [2024-07-15 15:23:47.944426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.135 [2024-07-15 15:23:47.967240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.135 [2024-07-15 15:23:47.967278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:10.135 [2024-07-15 15:23:47.967290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.820 ms 00:28:10.135 [2024-07-15 15:23:47.967299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.135 [2024-07-15 15:23:47.970664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.135 [2024-07-15 15:23:47.970697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:10.135 [2024-07-15 15:23:47.970708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.332 ms 00:28:10.135 [2024-07-15 15:23:47.970722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.135 [2024-07-15 15:23:48.008643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.135 [2024-07-15 15:23:48.008680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:28:10.135 [2024-07-15 15:23:48.008691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.977 ms 00:28:10.135 [2024-07-15 15:23:48.008698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.135 [2024-07-15 15:23:48.048088] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.135 [2024-07-15 15:23:48.048126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:28:10.135 [2024-07-15 15:23:48.048137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.430 ms 00:28:10.135 [2024-07-15 15:23:48.048144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.135 [2024-07-15 15:23:48.087155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.135 [2024-07-15 15:23:48.087215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:10.135 [2024-07-15 15:23:48.087241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.050 ms 00:28:10.135 [2024-07-15 15:23:48.087249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.135 [2024-07-15 15:23:48.125963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.135 [2024-07-15 15:23:48.126012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:10.135 [2024-07-15 15:23:48.126025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.713 ms 00:28:10.135 [2024-07-15 15:23:48.126032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.135 [2024-07-15 15:23:48.126085] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:10.135 [2024-07-15 15:23:48.126100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:10.135 [2024-07-15 15:23:48.126110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3840 / 261120 wr_cnt: 1 state: open 00:28:10.135 [2024-07-15 15:23:48.126118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:10.135 [2024-07-15 15:23:48.126126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:10.135 [2024-07-15 15:23:48.126134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:10.135 [2024-07-15 15:23:48.126142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:10.135 [2024-07-15 15:23:48.126150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:10.135 [2024-07-15 15:23:48.126158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:10.135 [2024-07-15 15:23:48.126166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:10.135 [2024-07-15 15:23:48.126173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:10.135 [2024-07-15 15:23:48.126181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:10.135 [2024-07-15 15:23:48.126188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:10.135 [2024-07-15 15:23:48.126196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:10.135 [2024-07-15 15:23:48.126203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:10.135 [2024-07-15 15:23:48.126211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: 
free 00:28:10.135 [2024-07-15 15:23:48.126219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:10.135 [2024-07-15 15:23:48.126226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:10.135 [2024-07-15 15:23:48.126233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:10.135 [2024-07-15 15:23:48.126241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:10.135 [2024-07-15 15:23:48.126248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:10.135 [2024-07-15 15:23:48.126255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:10.135 [2024-07-15 15:23:48.126262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:10.135 [2024-07-15 15:23:48.126270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:10.135 [2024-07-15 15:23:48.126278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:10.135 [2024-07-15 15:23:48.126285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:10.135 [2024-07-15 15:23:48.126293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:10.135 [2024-07-15 15:23:48.126300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:10.135 [2024-07-15 15:23:48.126307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:10.135 [2024-07-15 15:23:48.126315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:10.135 [2024-07-15 15:23:48.126322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:10.135 [2024-07-15 15:23:48.126331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:10.135 [2024-07-15 15:23:48.126339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:10.135 [2024-07-15 15:23:48.126347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:10.135 [2024-07-15 15:23:48.126355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:10.135 [2024-07-15 15:23:48.126363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 
261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126786] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:10.136 [2024-07-15 15:23:48.126878] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:10.136 [2024-07-15 15:23:48.126889] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3e1764c2-857f-4f54-b9e9-71a481f2125b 00:28:10.136 [2024-07-15 15:23:48.126897] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264960 00:28:10.136 [2024-07-15 15:23:48.126904] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:10.136 [2024-07-15 15:23:48.126917] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:10.136 [2024-07-15 15:23:48.126925] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:10.136 [2024-07-15 15:23:48.126931] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:10.136 [2024-07-15 15:23:48.126939] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:10.136 [2024-07-15 15:23:48.126946] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:10.136 [2024-07-15 15:23:48.126953] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:10.136 [2024-07-15 15:23:48.126959] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:10.136 [2024-07-15 15:23:48.126967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.136 [2024-07-15 15:23:48.126975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:10.136 [2024-07-15 15:23:48.126984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.885 ms 00:28:10.136 [2024-07-15 15:23:48.126999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.136 [2024-07-15 15:23:48.147821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.136 [2024-07-15 15:23:48.147873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:10.136 [2024-07-15 15:23:48.147893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 20.827 ms 00:28:10.136 [2024-07-15 15:23:48.147900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.136 [2024-07-15 15:23:48.148423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.136 [2024-07-15 15:23:48.148435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:10.136 [2024-07-15 15:23:48.148443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.497 ms 00:28:10.136 [2024-07-15 15:23:48.148451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.136 [2024-07-15 15:23:48.195114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.136 [2024-07-15 15:23:48.195171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:10.136 [2024-07-15 15:23:48.195185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.136 [2024-07-15 15:23:48.195210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.136 [2024-07-15 15:23:48.195284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.136 [2024-07-15 15:23:48.195293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:10.136 [2024-07-15 15:23:48.195302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.136 [2024-07-15 15:23:48.195311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.136 [2024-07-15 15:23:48.195398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.136 [2024-07-15 15:23:48.195412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:10.136 [2024-07-15 15:23:48.195421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.136 [2024-07-15 15:23:48.195429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.136 [2024-07-15 15:23:48.195447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.136 [2024-07-15 15:23:48.195456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:10.136 [2024-07-15 15:23:48.195464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.136 [2024-07-15 15:23:48.195472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.396 [2024-07-15 15:23:48.319971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.396 [2024-07-15 15:23:48.320049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:10.396 [2024-07-15 15:23:48.320064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.396 [2024-07-15 15:23:48.320073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.396 [2024-07-15 15:23:48.425135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.396 [2024-07-15 15:23:48.425199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:10.396 [2024-07-15 15:23:48.425212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.396 [2024-07-15 15:23:48.425220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.396 [2024-07-15 15:23:48.425289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.396 [2024-07-15 15:23:48.425305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:10.396 
[2024-07-15 15:23:48.425314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.396 [2024-07-15 15:23:48.425321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.396 [2024-07-15 15:23:48.425351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.396 [2024-07-15 15:23:48.425360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:10.396 [2024-07-15 15:23:48.425368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.396 [2024-07-15 15:23:48.425375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.396 [2024-07-15 15:23:48.425480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.396 [2024-07-15 15:23:48.425495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:10.396 [2024-07-15 15:23:48.425504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.396 [2024-07-15 15:23:48.425511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.396 [2024-07-15 15:23:48.425547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.396 [2024-07-15 15:23:48.425557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:10.396 [2024-07-15 15:23:48.425565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.396 [2024-07-15 15:23:48.425572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.396 [2024-07-15 15:23:48.425608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.396 [2024-07-15 15:23:48.425616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:10.396 [2024-07-15 15:23:48.425627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.396 [2024-07-15 15:23:48.425634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.396 [2024-07-15 15:23:48.425674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.396 [2024-07-15 15:23:48.425683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:10.396 [2024-07-15 15:23:48.425691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.396 [2024-07-15 15:23:48.425699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.396 [2024-07-15 15:23:48.425813] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 533.884 ms, result 0 00:28:11.773 00:28:11.773 00:28:11.774 15:23:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:28:13.677 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:28:13.677 15:23:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:28:13.677 15:23:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:28:13.677 15:23:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:13.677 15:23:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:13.677 15:23:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:28:13.677 15:23:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:13.677 15:23:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:28:13.677 15:23:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 84108 00:28:13.677 15:23:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@948 -- # '[' -z 84108 ']' 00:28:13.677 Process with pid 84108 is not found 00:28:13.677 15:23:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@952 -- # kill -0 84108 00:28:13.677 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (84108) - No such process 00:28:13.677 15:23:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@975 -- # echo 'Process with pid 84108 is not found' 00:28:13.677 15:23:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:28:13.937 Remove shared memory files 00:28:13.937 15:23:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:28:13.937 15:23:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:13.937 15:23:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:13.937 15:23:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:13.937 15:23:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:28:13.937 15:23:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:13.937 15:23:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:13.937 ************************************ 00:28:13.937 END TEST ftl_dirty_shutdown 00:28:13.937 ************************************ 00:28:13.937 00:28:13.937 real 3m0.517s 00:28:13.937 user 3m27.861s 00:28:13.937 sys 0m26.666s 00:28:13.937 15:23:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:13.937 15:23:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:14.197 15:23:52 ftl -- common/autotest_common.sh@1142 -- # return 0 00:28:14.197 15:23:52 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:28:14.197 15:23:52 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:28:14.197 15:23:52 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:14.197 15:23:52 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:14.197 ************************************ 00:28:14.197 START TEST ftl_upgrade_shutdown 00:28:14.197 ************************************ 00:28:14.197 15:23:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:28:14.197 * Looking for test storage... 00:28:14.197 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:14.197 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:14.197 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:28:14.197 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:14.197 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:14.197 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:28:14.197 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:14.197 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:14.197 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:14.197 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:14.197 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:14.197 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:14.197 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:14.197 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:14.197 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:14.197 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:14.197 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:14.197 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:14.197 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:14.197 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:14.197 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:14.197 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:14.197 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:14.197 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:14.197 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:14.197 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:14.197 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:14.197 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:14.197 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:14.197 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:14.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:14.197 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:14.198 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:28:14.198 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:28:14.198 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:28:14.198 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:28:14.198 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:28:14.198 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:28:14.198 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:28:14.198 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:28:14.198 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:28:14.198 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:28:14.198 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:28:14.198 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:28:14.198 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:28:14.198 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:14.198 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:14.198 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:14.198 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=86046 00:28:14.198 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:14.198 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 86046 00:28:14.198 15:23:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 86046 ']' 00:28:14.198 15:23:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:14.198 15:23:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:14.198 15:23:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:14.198 15:23:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:14.198 15:23:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:14.198 15:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:28:14.198 [2024-07-15 15:23:52.296435] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
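The tcp_target_setup trace above amounts to launching an SPDK target pinned to core 0 and blocking until its RPC socket answers. A minimal sketch of that launch-and-wait pattern, assuming the default application socket /var/tmp/spdk.sock and the repo paths used in this run (the polling loop stands in for the waitforlisten helper and is illustrative, not the helper's exact code):

  # start the SPDK target on core 0 in the background and remember its pid
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' &
  spdk_tgt_pid=$!

  # block until the UNIX-domain RPC socket accepts requests
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
          sleep 0.5
  done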
00:28:14.198 [2024-07-15 15:23:52.297062] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86046 ] 00:28:14.458 [2024-07-15 15:23:52.461910] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:14.716 [2024-07-15 15:23:52.708323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:15.657 15:23:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:15.657 15:23:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:28:15.657 15:23:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:15.657 15:23:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:28:15.657 15:23:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:28:15.657 15:23:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:15.657 15:23:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:28:15.657 15:23:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:15.657 15:23:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:28:15.657 15:23:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:15.657 15:23:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:28:15.657 15:23:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:15.657 15:23:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:28:15.657 15:23:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:15.657 15:23:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:28:15.657 15:23:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:15.657 15:23:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:28:15.657 15:23:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:28:15.657 15:23:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:28:15.657 15:23:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:15.657 15:23:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:28:15.657 15:23:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:28:15.657 15:23:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:28:15.946 15:23:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:28:15.946 15:23:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:28:15.946 15:23:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:28:15.946 15:23:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:28:15.946 15:23:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:15.946 15:23:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:28:15.946 15:23:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:28:15.946 15:23:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:28:16.206 15:23:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:16.206 { 00:28:16.206 "name": "basen1", 00:28:16.206 "aliases": [ 00:28:16.206 "fbfadce2-7909-4027-8d11-134cb76316df" 00:28:16.206 ], 00:28:16.206 "product_name": "NVMe disk", 00:28:16.206 "block_size": 4096, 00:28:16.206 "num_blocks": 1310720, 00:28:16.206 "uuid": "fbfadce2-7909-4027-8d11-134cb76316df", 00:28:16.206 "assigned_rate_limits": { 00:28:16.206 "rw_ios_per_sec": 0, 00:28:16.206 "rw_mbytes_per_sec": 0, 00:28:16.206 "r_mbytes_per_sec": 0, 00:28:16.206 "w_mbytes_per_sec": 0 00:28:16.206 }, 00:28:16.206 "claimed": true, 00:28:16.206 "claim_type": "read_many_write_one", 00:28:16.207 "zoned": false, 00:28:16.207 "supported_io_types": { 00:28:16.207 "read": true, 00:28:16.207 "write": true, 00:28:16.207 "unmap": true, 00:28:16.207 "flush": true, 00:28:16.207 "reset": true, 00:28:16.207 "nvme_admin": true, 00:28:16.207 "nvme_io": true, 00:28:16.207 "nvme_io_md": false, 00:28:16.207 "write_zeroes": true, 00:28:16.207 "zcopy": false, 00:28:16.207 "get_zone_info": false, 00:28:16.207 "zone_management": false, 00:28:16.207 "zone_append": false, 00:28:16.207 "compare": true, 00:28:16.207 "compare_and_write": false, 00:28:16.207 "abort": true, 00:28:16.207 "seek_hole": false, 00:28:16.207 "seek_data": false, 00:28:16.207 "copy": true, 00:28:16.207 "nvme_iov_md": false 00:28:16.207 }, 00:28:16.207 "driver_specific": { 00:28:16.207 "nvme": [ 00:28:16.207 { 00:28:16.207 "pci_address": "0000:00:11.0", 00:28:16.207 "trid": { 00:28:16.207 "trtype": "PCIe", 00:28:16.207 "traddr": "0000:00:11.0" 00:28:16.207 }, 00:28:16.207 "ctrlr_data": { 00:28:16.207 "cntlid": 0, 00:28:16.207 "vendor_id": "0x1b36", 00:28:16.207 "model_number": "QEMU NVMe Ctrl", 00:28:16.207 "serial_number": "12341", 00:28:16.207 "firmware_revision": "8.0.0", 00:28:16.207 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:16.207 "oacs": { 00:28:16.207 "security": 0, 00:28:16.207 "format": 1, 00:28:16.207 "firmware": 0, 00:28:16.207 "ns_manage": 1 00:28:16.207 }, 00:28:16.207 "multi_ctrlr": false, 00:28:16.207 "ana_reporting": false 00:28:16.207 }, 00:28:16.207 "vs": { 00:28:16.207 "nvme_version": "1.4" 00:28:16.207 }, 00:28:16.207 "ns_data": { 00:28:16.207 "id": 1, 00:28:16.207 "can_share": false 00:28:16.207 } 00:28:16.207 } 00:28:16.207 ], 00:28:16.207 "mp_policy": "active_passive" 00:28:16.207 } 00:28:16.207 } 00:28:16.207 ]' 00:28:16.207 15:23:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:16.207 15:23:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:28:16.207 15:23:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:16.207 15:23:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:28:16.207 15:23:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:28:16.207 15:23:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:28:16.467 15:23:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:28:16.467 15:23:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:28:16.467 15:23:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:28:16.467 15:23:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 
00:28:16.467 15:23:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:16.467 15:23:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=cc1af4f6-c736-4cbd-999f-e3f047d3d87c 00:28:16.467 15:23:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:28:16.467 15:23:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cc1af4f6-c736-4cbd-999f-e3f047d3d87c 00:28:16.727 15:23:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:28:16.987 15:23:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=e2c11311-a6a4-4b9d-92fc-1a169dc5b07d 00:28:16.987 15:23:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u e2c11311-a6a4-4b9d-92fc-1a169dc5b07d 00:28:17.245 15:23:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=2e7c88dd-637d-4946-b71a-d619451b668b 00:28:17.245 15:23:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 2e7c88dd-637d-4946-b71a-d619451b668b ]] 00:28:17.245 15:23:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 2e7c88dd-637d-4946-b71a-d619451b668b 5120 00:28:17.245 15:23:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:28:17.245 15:23:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:17.245 15:23:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=2e7c88dd-637d-4946-b71a-d619451b668b 00:28:17.245 15:23:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:28:17.245 15:23:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 2e7c88dd-637d-4946-b71a-d619451b668b 00:28:17.245 15:23:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=2e7c88dd-637d-4946-b71a-d619451b668b 00:28:17.245 15:23:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:17.245 15:23:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:28:17.245 15:23:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:28:17.245 15:23:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2e7c88dd-637d-4946-b71a-d619451b668b 00:28:17.504 15:23:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:17.504 { 00:28:17.504 "name": "2e7c88dd-637d-4946-b71a-d619451b668b", 00:28:17.504 "aliases": [ 00:28:17.504 "lvs/basen1p0" 00:28:17.504 ], 00:28:17.504 "product_name": "Logical Volume", 00:28:17.504 "block_size": 4096, 00:28:17.504 "num_blocks": 5242880, 00:28:17.504 "uuid": "2e7c88dd-637d-4946-b71a-d619451b668b", 00:28:17.504 "assigned_rate_limits": { 00:28:17.504 "rw_ios_per_sec": 0, 00:28:17.504 "rw_mbytes_per_sec": 0, 00:28:17.504 "r_mbytes_per_sec": 0, 00:28:17.504 "w_mbytes_per_sec": 0 00:28:17.504 }, 00:28:17.505 "claimed": false, 00:28:17.505 "zoned": false, 00:28:17.505 "supported_io_types": { 00:28:17.505 "read": true, 00:28:17.505 "write": true, 00:28:17.505 "unmap": true, 00:28:17.505 "flush": false, 00:28:17.505 "reset": true, 00:28:17.505 "nvme_admin": false, 00:28:17.505 "nvme_io": false, 00:28:17.505 "nvme_io_md": false, 00:28:17.505 "write_zeroes": true, 00:28:17.505 "zcopy": false, 
00:28:17.505 "get_zone_info": false, 00:28:17.505 "zone_management": false, 00:28:17.505 "zone_append": false, 00:28:17.505 "compare": false, 00:28:17.505 "compare_and_write": false, 00:28:17.505 "abort": false, 00:28:17.505 "seek_hole": true, 00:28:17.505 "seek_data": true, 00:28:17.505 "copy": false, 00:28:17.505 "nvme_iov_md": false 00:28:17.505 }, 00:28:17.505 "driver_specific": { 00:28:17.505 "lvol": { 00:28:17.505 "lvol_store_uuid": "e2c11311-a6a4-4b9d-92fc-1a169dc5b07d", 00:28:17.505 "base_bdev": "basen1", 00:28:17.505 "thin_provision": true, 00:28:17.505 "num_allocated_clusters": 0, 00:28:17.505 "snapshot": false, 00:28:17.505 "clone": false, 00:28:17.505 "esnap_clone": false 00:28:17.505 } 00:28:17.505 } 00:28:17.505 } 00:28:17.505 ]' 00:28:17.505 15:23:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:17.505 15:23:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:28:17.505 15:23:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:17.505 15:23:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:28:17.505 15:23:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:28:17.505 15:23:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:28:17.505 15:23:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:28:17.505 15:23:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:28:17.505 15:23:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:28:17.764 15:23:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:28:17.764 15:23:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:28:17.764 15:23:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:28:18.022 15:23:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:28:18.022 15:23:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:28:18.022 15:23:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 2e7c88dd-637d-4946-b71a-d619451b668b -c cachen1p0 --l2p_dram_limit 2 00:28:18.283 [2024-07-15 15:23:56.212575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.283 [2024-07-15 15:23:56.212639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:18.283 [2024-07-15 15:23:56.212656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:18.283 [2024-07-15 15:23:56.212666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.283 [2024-07-15 15:23:56.212738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.283 [2024-07-15 15:23:56.212751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:18.283 [2024-07-15 15:23:56.212760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:28:18.283 [2024-07-15 15:23:56.212771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.283 [2024-07-15 15:23:56.212793] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:18.283 [2024-07-15 15:23:56.214186] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:18.283 [2024-07-15 15:23:56.214215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.283 [2024-07-15 15:23:56.214230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:18.283 [2024-07-15 15:23:56.214240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.431 ms 00:28:18.283 [2024-07-15 15:23:56.214250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.283 [2024-07-15 15:23:56.214328] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID fde2acbb-f600-4c55-ac99-4c607b8f33d1 00:28:18.283 [2024-07-15 15:23:56.215829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.283 [2024-07-15 15:23:56.215867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:28:18.283 [2024-07-15 15:23:56.215883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:28:18.283 [2024-07-15 15:23:56.215891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.283 [2024-07-15 15:23:56.223532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.283 [2024-07-15 15:23:56.223568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:18.283 [2024-07-15 15:23:56.223585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.604 ms 00:28:18.283 [2024-07-15 15:23:56.223593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.283 [2024-07-15 15:23:56.223658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.283 [2024-07-15 15:23:56.223674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:18.283 [2024-07-15 15:23:56.223685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:28:18.283 [2024-07-15 15:23:56.223694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.283 [2024-07-15 15:23:56.223782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.283 [2024-07-15 15:23:56.223794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:18.283 [2024-07-15 15:23:56.223804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:28:18.283 [2024-07-15 15:23:56.223815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.283 [2024-07-15 15:23:56.223844] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:18.283 [2024-07-15 15:23:56.230356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.283 [2024-07-15 15:23:56.230397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:18.283 [2024-07-15 15:23:56.230408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.535 ms 00:28:18.283 [2024-07-15 15:23:56.230418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.283 [2024-07-15 15:23:56.230450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.283 [2024-07-15 15:23:56.230462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:18.283 [2024-07-15 15:23:56.230471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:18.283 [2024-07-15 15:23:56.230481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.283 [2024-07-15 
15:23:56.230519] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:28:18.283 [2024-07-15 15:23:56.230669] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:18.283 [2024-07-15 15:23:56.230681] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:18.283 [2024-07-15 15:23:56.230696] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:28:18.283 [2024-07-15 15:23:56.230708] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:18.283 [2024-07-15 15:23:56.230721] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:18.283 [2024-07-15 15:23:56.230730] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:18.284 [2024-07-15 15:23:56.230741] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:18.284 [2024-07-15 15:23:56.230752] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:18.284 [2024-07-15 15:23:56.230762] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:18.284 [2024-07-15 15:23:56.230770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.284 [2024-07-15 15:23:56.230780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:18.284 [2024-07-15 15:23:56.230789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.254 ms 00:28:18.284 [2024-07-15 15:23:56.230799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.284 [2024-07-15 15:23:56.230876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.284 [2024-07-15 15:23:56.230887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:18.284 [2024-07-15 15:23:56.230896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.060 ms 00:28:18.284 [2024-07-15 15:23:56.230906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.284 [2024-07-15 15:23:56.231019] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:18.284 [2024-07-15 15:23:56.231037] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:18.284 [2024-07-15 15:23:56.231045] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:18.284 [2024-07-15 15:23:56.231055] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:18.284 [2024-07-15 15:23:56.231064] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:18.284 [2024-07-15 15:23:56.231089] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:18.284 [2024-07-15 15:23:56.231111] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:18.284 [2024-07-15 15:23:56.231122] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:18.284 [2024-07-15 15:23:56.231131] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:18.284 [2024-07-15 15:23:56.231140] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:18.284 [2024-07-15 15:23:56.231148] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:18.284 [2024-07-15 15:23:56.231158] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:18.284 
[2024-07-15 15:23:56.231166] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:18.284 [2024-07-15 15:23:56.231176] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:18.284 [2024-07-15 15:23:56.231183] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:18.284 [2024-07-15 15:23:56.231192] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:18.284 [2024-07-15 15:23:56.231200] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:18.284 [2024-07-15 15:23:56.231211] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:18.284 [2024-07-15 15:23:56.231219] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:18.284 [2024-07-15 15:23:56.231227] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:18.284 [2024-07-15 15:23:56.231235] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:18.284 [2024-07-15 15:23:56.231244] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:18.284 [2024-07-15 15:23:56.231251] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:18.284 [2024-07-15 15:23:56.231260] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:18.284 [2024-07-15 15:23:56.231267] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:18.284 [2024-07-15 15:23:56.231276] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:18.284 [2024-07-15 15:23:56.231284] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:18.284 [2024-07-15 15:23:56.231293] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:18.284 [2024-07-15 15:23:56.231301] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:18.284 [2024-07-15 15:23:56.231311] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:18.284 [2024-07-15 15:23:56.231318] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:18.284 [2024-07-15 15:23:56.231327] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:18.284 [2024-07-15 15:23:56.231334] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:18.284 [2024-07-15 15:23:56.231344] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:18.284 [2024-07-15 15:23:56.231352] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:18.284 [2024-07-15 15:23:56.231360] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:18.284 [2024-07-15 15:23:56.231368] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:18.284 [2024-07-15 15:23:56.231378] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:18.284 [2024-07-15 15:23:56.231385] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:18.284 [2024-07-15 15:23:56.231394] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:18.284 [2024-07-15 15:23:56.231401] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:18.284 [2024-07-15 15:23:56.231411] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:18.284 [2024-07-15 15:23:56.231418] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:18.284 [2024-07-15 15:23:56.231426] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 
00:28:18.284 [2024-07-15 15:23:56.231435] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:18.284 [2024-07-15 15:23:56.231445] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:18.284 [2024-07-15 15:23:56.231453] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:18.284 [2024-07-15 15:23:56.231463] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:18.284 [2024-07-15 15:23:56.231470] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:18.284 [2024-07-15 15:23:56.231489] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:18.284 [2024-07-15 15:23:56.231498] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:18.284 [2024-07-15 15:23:56.231510] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:18.284 [2024-07-15 15:23:56.231519] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:18.284 [2024-07-15 15:23:56.231536] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:18.284 [2024-07-15 15:23:56.231547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:18.284 [2024-07-15 15:23:56.231561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:18.284 [2024-07-15 15:23:56.231569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:18.284 [2024-07-15 15:23:56.231580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:18.284 [2024-07-15 15:23:56.231588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:18.284 [2024-07-15 15:23:56.231597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:18.284 [2024-07-15 15:23:56.231605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:18.284 [2024-07-15 15:23:56.231616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:18.284 [2024-07-15 15:23:56.231624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:18.284 [2024-07-15 15:23:56.231633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:18.284 [2024-07-15 15:23:56.231641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:18.284 [2024-07-15 15:23:56.231652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:18.284 [2024-07-15 15:23:56.231660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:18.284 [2024-07-15 15:23:56.231670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 
blk_sz:0x20 00:28:18.284 [2024-07-15 15:23:56.231678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:18.284 [2024-07-15 15:23:56.231688] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:18.284 [2024-07-15 15:23:56.231697] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:18.284 [2024-07-15 15:23:56.231707] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:18.284 [2024-07-15 15:23:56.231716] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:18.284 [2024-07-15 15:23:56.231725] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:18.284 [2024-07-15 15:23:56.231733] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:18.284 [2024-07-15 15:23:56.231744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.284 [2024-07-15 15:23:56.231752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:18.284 [2024-07-15 15:23:56.231762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.800 ms 00:28:18.284 [2024-07-15 15:23:56.231770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.284 [2024-07-15 15:23:56.231824] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
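The layout dump above is printed while bdev_ftl_create assembles the FTL instance on top of the stack built earlier in this trace: a thin-provisioned lvol on the 0000:00:11.0 NVMe device as the base, and a split of the 0000:00:10.0 device as the non-volatile write-buffer cache. Condensed from the RPC calls already shown (UUIDs replaced with placeholders), the sequence is roughly:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # base device: attach the NVMe controller and carve a 20480 MiB thin-provisioned lvol from it
  $rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0
  $rpc bdev_lvol_create_lvstore basen1 lvs
  $rpc bdev_lvol_create basen1p0 20480 -t -u <lvstore uuid>

  # cache device: attach the second controller and split off a 5120 MiB partition
  $rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
  $rpc bdev_split_create cachen1 -s 5120 1

  # create the FTL bdev on the lvol, with cachen1p0 as NV cache and a small L2P DRAM budget
  $rpc -t 60 bdev_ftl_create -b ftl -d <lvol uuid> -c cachen1p0 --l2p_dram_limit 2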
00:28:18.284 [2024-07-15 15:23:56.231834] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:28:21.590 [2024-07-15 15:23:59.529728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.590 [2024-07-15 15:23:59.529804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:28:21.590 [2024-07-15 15:23:59.529821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3304.258 ms 00:28:21.590 [2024-07-15 15:23:59.529829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.590 [2024-07-15 15:23:59.574085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.590 [2024-07-15 15:23:59.574144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:21.590 [2024-07-15 15:23:59.574159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 44.013 ms 00:28:21.590 [2024-07-15 15:23:59.574168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.590 [2024-07-15 15:23:59.574287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.590 [2024-07-15 15:23:59.574298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:21.591 [2024-07-15 15:23:59.574309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:28:21.591 [2024-07-15 15:23:59.574319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.591 [2024-07-15 15:23:59.626916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.591 [2024-07-15 15:23:59.626975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:21.591 [2024-07-15 15:23:59.627002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 52.657 ms 00:28:21.591 [2024-07-15 15:23:59.627011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.591 [2024-07-15 15:23:59.627077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.591 [2024-07-15 15:23:59.627089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:21.591 [2024-07-15 15:23:59.627099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:21.591 [2024-07-15 15:23:59.627107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.591 [2024-07-15 15:23:59.627620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.591 [2024-07-15 15:23:59.627647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:21.591 [2024-07-15 15:23:59.627659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.442 ms 00:28:21.591 [2024-07-15 15:23:59.627667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.591 [2024-07-15 15:23:59.627726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.591 [2024-07-15 15:23:59.627740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:21.591 [2024-07-15 15:23:59.627753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:28:21.591 [2024-07-15 15:23:59.627761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.591 [2024-07-15 15:23:59.649828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.591 [2024-07-15 15:23:59.649887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:21.591 [2024-07-15 15:23:59.649902] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.081 ms 00:28:21.591 [2024-07-15 15:23:59.649909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.591 [2024-07-15 15:23:59.665199] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:21.591 [2024-07-15 15:23:59.666337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.591 [2024-07-15 15:23:59.666367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:21.591 [2024-07-15 15:23:59.666380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.332 ms 00:28:21.591 [2024-07-15 15:23:59.666391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.853 [2024-07-15 15:23:59.714097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.853 [2024-07-15 15:23:59.714193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:28:21.853 [2024-07-15 15:23:59.714210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.741 ms 00:28:21.853 [2024-07-15 15:23:59.714221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.853 [2024-07-15 15:23:59.714370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.853 [2024-07-15 15:23:59.714389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:21.853 [2024-07-15 15:23:59.714399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.060 ms 00:28:21.853 [2024-07-15 15:23:59.714412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.853 [2024-07-15 15:23:59.762326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.853 [2024-07-15 15:23:59.762406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:28:21.853 [2024-07-15 15:23:59.762421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.919 ms 00:28:21.853 [2024-07-15 15:23:59.762431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.853 [2024-07-15 15:23:59.806456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.853 [2024-07-15 15:23:59.806535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:28:21.853 [2024-07-15 15:23:59.806549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 44.017 ms 00:28:21.853 [2024-07-15 15:23:59.806558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.853 [2024-07-15 15:23:59.807570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.853 [2024-07-15 15:23:59.807601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:21.853 [2024-07-15 15:23:59.807613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.908 ms 00:28:21.853 [2024-07-15 15:23:59.807627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.853 [2024-07-15 15:23:59.931384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.853 [2024-07-15 15:23:59.931467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:28:21.853 [2024-07-15 15:23:59.931483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 123.906 ms 00:28:21.853 [2024-07-15 15:23:59.931497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.129 [2024-07-15 15:23:59.977600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:28:22.129 [2024-07-15 15:23:59.977670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:28:22.129 [2024-07-15 15:23:59.977686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.138 ms 00:28:22.129 [2024-07-15 15:23:59.977699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.129 [2024-07-15 15:24:00.023792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.129 [2024-07-15 15:24:00.023860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:28:22.129 [2024-07-15 15:24:00.023887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.133 ms 00:28:22.129 [2024-07-15 15:24:00.023897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.129 [2024-07-15 15:24:00.064755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.129 [2024-07-15 15:24:00.064828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:28:22.129 [2024-07-15 15:24:00.064841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.884 ms 00:28:22.129 [2024-07-15 15:24:00.064851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.129 [2024-07-15 15:24:00.064906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.129 [2024-07-15 15:24:00.064916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:22.129 [2024-07-15 15:24:00.064925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:28:22.129 [2024-07-15 15:24:00.064938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.129 [2024-07-15 15:24:00.065068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.129 [2024-07-15 15:24:00.065083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:22.129 [2024-07-15 15:24:00.065094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:28:22.129 [2024-07-15 15:24:00.065103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.129 [2024-07-15 15:24:00.066462] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3860.631 ms, result 0 00:28:22.129 { 00:28:22.129 "name": "ftl", 00:28:22.129 "uuid": "fde2acbb-f600-4c55-ac99-4c607b8f33d1" 00:28:22.129 } 00:28:22.129 15:24:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:28:22.388 [2024-07-15 15:24:00.276921] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:22.388 15:24:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:28:22.388 15:24:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:28:22.647 [2024-07-15 15:24:00.668532] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:22.647 15:24:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:28:22.907 [2024-07-15 15:24:00.870752] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:22.907 15:24:00 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:28:23.168 Fill FTL, iteration 1 00:28:23.168 15:24:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:28:23.168 15:24:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:28:23.168 15:24:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:28:23.168 15:24:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:28:23.168 15:24:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:28:23.168 15:24:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:28:23.168 15:24:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:28:23.168 15:24:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:28:23.168 15:24:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:28:23.168 15:24:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:23.168 15:24:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:28:23.168 15:24:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:28:23.168 15:24:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:23.168 15:24:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:23.168 15:24:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:23.168 15:24:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:28:23.168 15:24:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=86174 00:28:23.168 15:24:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:28:23.168 15:24:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:28:23.168 15:24:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 86174 /var/tmp/spdk.tgt.sock 00:28:23.168 15:24:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 86174 ']' 00:28:23.168 15:24:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:28:23.168 15:24:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:23.168 15:24:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:28:23.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:28:23.168 15:24:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:23.168 15:24:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:23.425 [2024-07-15 15:24:01.368857] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
00:28:23.425 [2024-07-15 15:24:01.369084] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86174 ] 00:28:23.425 [2024-07-15 15:24:01.537430] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.992 [2024-07-15 15:24:01.834003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:24.949 15:24:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:24.949 15:24:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:28:24.949 15:24:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:28:25.209 ftln1 00:28:25.209 15:24:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:28:25.209 15:24:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:28:25.469 15:24:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:28:25.469 15:24:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 86174 00:28:25.469 15:24:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 86174 ']' 00:28:25.469 15:24:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 86174 00:28:25.469 15:24:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname 00:28:25.469 15:24:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:25.469 15:24:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86174 00:28:25.469 killing process with pid 86174 00:28:25.469 15:24:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:25.469 15:24:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:25.469 15:24:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86174' 00:28:25.469 15:24:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 86174 00:28:25.469 15:24:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # wait 86174 00:28:28.007 15:24:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:28:28.007 15:24:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:28:28.266 [2024-07-15 15:24:06.164572] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
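The initiator half, sketched from the calls just logged: the exported subsystem is attached as controller "ftl" (its namespace appears as bdev ftln1), the bdev subsystem configuration is wrapped into ini.json so spdk_dd can recreate the attachment on its own, the helper target is stopped, and the first fill pass writes 1024 x 1 MiB of random data at queue depth 2 starting at block 0. That the JSON fragments are redirected into ini.json is an inference from the earlier [[ -f .../ini.json ]] check; the commands themselves are taken from the log:

ini_rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock"
ini_json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
# attach the exported subsystem; prints the namespace bdev name (ftln1)
$ini_rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0
# capture only the bdev subsystem so spdk_dd can replay the attachment from a config file
{ echo '{"subsystems": ['; $ini_rpc save_subsystem_config -n bdev; echo ']}'; } > "$ini_json"
# fill pass 1: 1024 blocks of 1 MiB from /dev/urandom into ftln1, qd=2, starting at block 0
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
  --json="$ini_json" --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0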
00:28:28.266 [2024-07-15 15:24:06.164699] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86232 ] 00:28:28.266 [2024-07-15 15:24:06.331336] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.524 [2024-07-15 15:24:06.579161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:35.431  Copying: 228/1024 [MB] (228 MBps) Copying: 458/1024 [MB] (230 MBps) Copying: 679/1024 [MB] (221 MBps) Copying: 906/1024 [MB] (227 MBps) Copying: 1024/1024 [MB] (average 226 MBps) 00:28:35.431 00:28:35.431 Calculate MD5 checksum, iteration 1 00:28:35.431 15:24:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:28:35.431 15:24:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:28:35.431 15:24:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:35.431 15:24:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:35.431 15:24:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:35.431 15:24:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:35.431 15:24:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:35.431 15:24:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:35.431 [2024-07-15 15:24:13.244590] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
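Each fill is paired with a read-back pass: spdk_dd re-creates the attachment from the same ini.json, copies the 1 GiB window back out of ftln1 into a scratch file, and the file's md5sum becomes that iteration's reference checksum. A sketch using the paths shown above:

ini_json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
testfile=/home/vagrant/spdk_repo/spdk/test/ftl/file
# read back the same 1024 x 1 MiB window (block offset 0 for iteration 1)
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
  --json="$ini_json" --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=0
# checksum of the read-back data; stored as sums[0] by the test script
md5sum "$testfile" | cut -f1 -d' '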
00:28:35.431 [2024-07-15 15:24:13.244698] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86306 ] 00:28:35.431 [2024-07-15 15:24:13.413171] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.694 [2024-07-15 15:24:13.673346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:38.970  Copying: 629/1024 [MB] (629 MBps) Copying: 1024/1024 [MB] (average 624 MBps) 00:28:38.970 00:28:38.970 15:24:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:28:38.970 15:24:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:40.868 15:24:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:40.868 Fill FTL, iteration 2 00:28:40.868 15:24:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=7d04650d73398d8fa3ab4a193fb20bfb 00:28:40.868 15:24:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:40.868 15:24:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:40.868 15:24:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:28:40.868 15:24:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:40.868 15:24:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:40.868 15:24:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:40.868 15:24:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:40.868 15:24:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:40.868 15:24:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:41.126 [2024-07-15 15:24:19.003157] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
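The seek/skip bookkeeping behind "iteration 2": both offsets advance by count blocks after each pass, so the second window starts at block 1024, i.e. byte offset 1024 x 1048576 = 1073741824, matching the size set at the top of the test. The loop below is a rough reconstruction of upgrade_shutdown.sh from the fragments echoed in this log (tcp_dd is the helper shown earlier; the exact assignments and error handling in the real script may differ):

bs=1048576; count=1024; qd=2; iterations=2
seek=0; skip=0; sums=()
testfile=/home/vagrant/spdk_repo/spdk/test/ftl/file
for (( i = 0; i < iterations; i++ )); do
    echo "Fill FTL, iteration $(( i + 1 ))"
    tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
    seek=$(( seek + count ))                  # 0 -> 1024 -> 2048 ...
    echo "Calculate MD5 checksum, iteration $(( i + 1 ))"
    tcp_dd --ib=ftln1 --of=$testfile --bs=$bs --count=$count --qd=$qd --skip=$skip
    skip=$(( skip + count ))
    sums[i]=$(md5sum $testfile | cut -f1 -d' ')   # reference checksums kept for later verification
done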
00:28:41.127 [2024-07-15 15:24:19.003285] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86373 ] 00:28:41.127 [2024-07-15 15:24:19.163259] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.384 [2024-07-15 15:24:19.416874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:48.110  Copying: 204/1024 [MB] (204 MBps) Copying: 426/1024 [MB] (222 MBps) Copying: 649/1024 [MB] (223 MBps) Copying: 852/1024 [MB] (203 MBps) Copying: 1024/1024 [MB] (average 215 MBps) 00:28:48.110 00:28:48.110 Calculate MD5 checksum, iteration 2 00:28:48.110 15:24:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:28:48.110 15:24:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:28:48.110 15:24:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:48.110 15:24:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:48.110 15:24:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:48.110 15:24:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:48.110 15:24:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:48.110 15:24:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:48.110 [2024-07-15 15:24:26.120870] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
00:28:48.110 [2024-07-15 15:24:26.121030] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86448 ] 00:28:48.369 [2024-07-15 15:24:26.283088] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.628 [2024-07-15 15:24:26.528858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:53.049  Copying: 578/1024 [MB] (578 MBps) Copying: 1024/1024 [MB] (average 563 MBps) 00:28:53.049 00:28:53.049 15:24:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:28:53.049 15:24:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:54.421 15:24:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:54.421 15:24:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=33f1892fece898ee3404f5d062fe6fc9 00:28:54.421 15:24:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:54.421 15:24:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:54.421 15:24:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:54.681 [2024-07-15 15:24:32.664030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:54.681 [2024-07-15 15:24:32.664109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:54.681 [2024-07-15 15:24:32.664128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:28:54.681 [2024-07-15 15:24:32.664138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.681 [2024-07-15 15:24:32.664177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:54.681 [2024-07-15 15:24:32.664190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:54.681 [2024-07-15 15:24:32.664200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:54.681 [2024-07-15 15:24:32.664217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.681 [2024-07-15 15:24:32.664239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:54.681 [2024-07-15 15:24:32.664250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:54.681 [2024-07-15 15:24:32.664272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:54.681 [2024-07-15 15:24:32.664282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.681 [2024-07-15 15:24:32.664359] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.355 ms, result 0 00:28:54.681 true 00:28:54.681 15:24:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:54.941 { 00:28:54.941 "name": "ftl", 00:28:54.941 "properties": [ 00:28:54.941 { 00:28:54.941 "name": "superblock_version", 00:28:54.941 "value": 5, 00:28:54.941 "read-only": true 00:28:54.941 }, 00:28:54.941 { 00:28:54.941 "name": "base_device", 00:28:54.941 "bands": [ 00:28:54.941 { 00:28:54.941 "id": 0, 00:28:54.941 "state": "FREE", 00:28:54.941 "validity": 0.0 00:28:54.941 }, 00:28:54.941 { 00:28:54.941 "id": 1, 
00:28:54.941 "state": "FREE", 00:28:54.941 "validity": 0.0 00:28:54.941 }, 00:28:54.941 { 00:28:54.941 "id": 2, 00:28:54.941 "state": "FREE", 00:28:54.941 "validity": 0.0 00:28:54.941 }, 00:28:54.941 { 00:28:54.941 "id": 3, 00:28:54.941 "state": "FREE", 00:28:54.941 "validity": 0.0 00:28:54.941 }, 00:28:54.941 { 00:28:54.941 "id": 4, 00:28:54.941 "state": "FREE", 00:28:54.941 "validity": 0.0 00:28:54.941 }, 00:28:54.941 { 00:28:54.941 "id": 5, 00:28:54.941 "state": "FREE", 00:28:54.941 "validity": 0.0 00:28:54.941 }, 00:28:54.941 { 00:28:54.941 "id": 6, 00:28:54.941 "state": "FREE", 00:28:54.941 "validity": 0.0 00:28:54.941 }, 00:28:54.941 { 00:28:54.941 "id": 7, 00:28:54.941 "state": "FREE", 00:28:54.941 "validity": 0.0 00:28:54.941 }, 00:28:54.941 { 00:28:54.941 "id": 8, 00:28:54.941 "state": "FREE", 00:28:54.941 "validity": 0.0 00:28:54.941 }, 00:28:54.941 { 00:28:54.941 "id": 9, 00:28:54.941 "state": "FREE", 00:28:54.941 "validity": 0.0 00:28:54.941 }, 00:28:54.941 { 00:28:54.941 "id": 10, 00:28:54.941 "state": "FREE", 00:28:54.941 "validity": 0.0 00:28:54.941 }, 00:28:54.941 { 00:28:54.941 "id": 11, 00:28:54.941 "state": "FREE", 00:28:54.941 "validity": 0.0 00:28:54.941 }, 00:28:54.941 { 00:28:54.941 "id": 12, 00:28:54.941 "state": "FREE", 00:28:54.941 "validity": 0.0 00:28:54.941 }, 00:28:54.941 { 00:28:54.941 "id": 13, 00:28:54.941 "state": "FREE", 00:28:54.941 "validity": 0.0 00:28:54.941 }, 00:28:54.941 { 00:28:54.941 "id": 14, 00:28:54.941 "state": "FREE", 00:28:54.941 "validity": 0.0 00:28:54.941 }, 00:28:54.941 { 00:28:54.941 "id": 15, 00:28:54.941 "state": "FREE", 00:28:54.941 "validity": 0.0 00:28:54.941 }, 00:28:54.941 { 00:28:54.941 "id": 16, 00:28:54.941 "state": "FREE", 00:28:54.941 "validity": 0.0 00:28:54.941 }, 00:28:54.941 { 00:28:54.941 "id": 17, 00:28:54.941 "state": "FREE", 00:28:54.941 "validity": 0.0 00:28:54.941 } 00:28:54.941 ], 00:28:54.941 "read-only": true 00:28:54.941 }, 00:28:54.941 { 00:28:54.941 "name": "cache_device", 00:28:54.941 "type": "bdev", 00:28:54.941 "chunks": [ 00:28:54.941 { 00:28:54.941 "id": 0, 00:28:54.941 "state": "INACTIVE", 00:28:54.941 "utilization": 0.0 00:28:54.941 }, 00:28:54.941 { 00:28:54.941 "id": 1, 00:28:54.941 "state": "CLOSED", 00:28:54.941 "utilization": 1.0 00:28:54.941 }, 00:28:54.941 { 00:28:54.941 "id": 2, 00:28:54.941 "state": "CLOSED", 00:28:54.941 "utilization": 1.0 00:28:54.941 }, 00:28:54.941 { 00:28:54.941 "id": 3, 00:28:54.941 "state": "OPEN", 00:28:54.941 "utilization": 0.001953125 00:28:54.941 }, 00:28:54.941 { 00:28:54.941 "id": 4, 00:28:54.942 "state": "OPEN", 00:28:54.942 "utilization": 0.0 00:28:54.942 } 00:28:54.942 ], 00:28:54.942 "read-only": true 00:28:54.942 }, 00:28:54.942 { 00:28:54.942 "name": "verbose_mode", 00:28:54.942 "value": true, 00:28:54.942 "unit": "", 00:28:54.942 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:54.942 }, 00:28:54.942 { 00:28:54.942 "name": "prep_upgrade_on_shutdown", 00:28:54.942 "value": false, 00:28:54.942 "unit": "", 00:28:54.942 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:54.942 } 00:28:54.942 ] 00:28:54.942 } 00:28:54.942 15:24:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:28:55.201 [2024-07-15 15:24:33.079653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.201 [2024-07-15 15:24:33.079733] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:55.201 [2024-07-15 15:24:33.079751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:28:55.201 [2024-07-15 15:24:33.079760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.201 [2024-07-15 15:24:33.079796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.201 [2024-07-15 15:24:33.079808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:55.201 [2024-07-15 15:24:33.079818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:55.201 [2024-07-15 15:24:33.079827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.201 [2024-07-15 15:24:33.079848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.201 [2024-07-15 15:24:33.079858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:55.201 [2024-07-15 15:24:33.079868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:55.201 [2024-07-15 15:24:33.079877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.201 [2024-07-15 15:24:33.079947] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.297 ms, result 0 00:28:55.201 true 00:28:55.201 15:24:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:28:55.201 15:24:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:55.201 15:24:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:28:55.460 15:24:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:28:55.460 15:24:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:28:55.460 15:24:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:55.460 [2024-07-15 15:24:33.475255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.460 [2024-07-15 15:24:33.475427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:55.460 [2024-07-15 15:24:33.475467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:28:55.460 [2024-07-15 15:24:33.475492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.460 [2024-07-15 15:24:33.475547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.460 [2024-07-15 15:24:33.475589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:55.460 [2024-07-15 15:24:33.475617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:55.460 [2024-07-15 15:24:33.475685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.460 [2024-07-15 15:24:33.475731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.460 [2024-07-15 15:24:33.475785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:55.460 [2024-07-15 15:24:33.475835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:55.460 [2024-07-15 15:24:33.475868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.460 [2024-07-15 15:24:33.475971] 
mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.714 ms, result 0 00:28:55.460 true 00:28:55.460 15:24:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:55.719 { 00:28:55.719 "name": "ftl", 00:28:55.719 "properties": [ 00:28:55.719 { 00:28:55.719 "name": "superblock_version", 00:28:55.719 "value": 5, 00:28:55.719 "read-only": true 00:28:55.719 }, 00:28:55.719 { 00:28:55.719 "name": "base_device", 00:28:55.719 "bands": [ 00:28:55.719 { 00:28:55.719 "id": 0, 00:28:55.719 "state": "FREE", 00:28:55.719 "validity": 0.0 00:28:55.719 }, 00:28:55.719 { 00:28:55.719 "id": 1, 00:28:55.719 "state": "FREE", 00:28:55.719 "validity": 0.0 00:28:55.719 }, 00:28:55.719 { 00:28:55.719 "id": 2, 00:28:55.719 "state": "FREE", 00:28:55.719 "validity": 0.0 00:28:55.719 }, 00:28:55.719 { 00:28:55.719 "id": 3, 00:28:55.719 "state": "FREE", 00:28:55.719 "validity": 0.0 00:28:55.719 }, 00:28:55.719 { 00:28:55.719 "id": 4, 00:28:55.719 "state": "FREE", 00:28:55.719 "validity": 0.0 00:28:55.719 }, 00:28:55.719 { 00:28:55.719 "id": 5, 00:28:55.719 "state": "FREE", 00:28:55.719 "validity": 0.0 00:28:55.719 }, 00:28:55.719 { 00:28:55.720 "id": 6, 00:28:55.720 "state": "FREE", 00:28:55.720 "validity": 0.0 00:28:55.720 }, 00:28:55.720 { 00:28:55.720 "id": 7, 00:28:55.720 "state": "FREE", 00:28:55.720 "validity": 0.0 00:28:55.720 }, 00:28:55.720 { 00:28:55.720 "id": 8, 00:28:55.720 "state": "FREE", 00:28:55.720 "validity": 0.0 00:28:55.720 }, 00:28:55.720 { 00:28:55.720 "id": 9, 00:28:55.720 "state": "FREE", 00:28:55.720 "validity": 0.0 00:28:55.720 }, 00:28:55.720 { 00:28:55.720 "id": 10, 00:28:55.720 "state": "FREE", 00:28:55.720 "validity": 0.0 00:28:55.720 }, 00:28:55.720 { 00:28:55.720 "id": 11, 00:28:55.720 "state": "FREE", 00:28:55.720 "validity": 0.0 00:28:55.720 }, 00:28:55.720 { 00:28:55.720 "id": 12, 00:28:55.720 "state": "FREE", 00:28:55.720 "validity": 0.0 00:28:55.720 }, 00:28:55.720 { 00:28:55.720 "id": 13, 00:28:55.720 "state": "FREE", 00:28:55.720 "validity": 0.0 00:28:55.720 }, 00:28:55.720 { 00:28:55.720 "id": 14, 00:28:55.720 "state": "FREE", 00:28:55.720 "validity": 0.0 00:28:55.720 }, 00:28:55.720 { 00:28:55.720 "id": 15, 00:28:55.720 "state": "FREE", 00:28:55.720 "validity": 0.0 00:28:55.720 }, 00:28:55.720 { 00:28:55.720 "id": 16, 00:28:55.720 "state": "FREE", 00:28:55.720 "validity": 0.0 00:28:55.720 }, 00:28:55.720 { 00:28:55.720 "id": 17, 00:28:55.720 "state": "FREE", 00:28:55.720 "validity": 0.0 00:28:55.720 } 00:28:55.720 ], 00:28:55.720 "read-only": true 00:28:55.720 }, 00:28:55.720 { 00:28:55.720 "name": "cache_device", 00:28:55.720 "type": "bdev", 00:28:55.720 "chunks": [ 00:28:55.720 { 00:28:55.720 "id": 0, 00:28:55.720 "state": "INACTIVE", 00:28:55.720 "utilization": 0.0 00:28:55.720 }, 00:28:55.720 { 00:28:55.720 "id": 1, 00:28:55.720 "state": "CLOSED", 00:28:55.720 "utilization": 1.0 00:28:55.720 }, 00:28:55.720 { 00:28:55.720 "id": 2, 00:28:55.720 "state": "CLOSED", 00:28:55.720 "utilization": 1.0 00:28:55.720 }, 00:28:55.720 { 00:28:55.720 "id": 3, 00:28:55.720 "state": "OPEN", 00:28:55.720 "utilization": 0.001953125 00:28:55.720 }, 00:28:55.720 { 00:28:55.720 "id": 4, 00:28:55.720 "state": "OPEN", 00:28:55.720 "utilization": 0.0 00:28:55.720 } 00:28:55.720 ], 00:28:55.720 "read-only": true 00:28:55.720 }, 00:28:55.720 { 00:28:55.720 "name": "verbose_mode", 00:28:55.720 "value": true, 00:28:55.720 "unit": "", 00:28:55.720 
"desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:55.720 }, 00:28:55.720 { 00:28:55.720 "name": "prep_upgrade_on_shutdown", 00:28:55.720 "value": true, 00:28:55.720 "unit": "", 00:28:55.720 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:55.720 } 00:28:55.720 ] 00:28:55.720 } 00:28:55.720 15:24:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:28:55.720 15:24:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 86046 ]] 00:28:55.720 15:24:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 86046 00:28:55.720 15:24:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 86046 ']' 00:28:55.720 15:24:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 86046 00:28:55.720 15:24:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname 00:28:55.720 15:24:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:55.720 15:24:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86046 00:28:55.720 15:24:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:55.720 killing process with pid 86046 00:28:55.720 15:24:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:55.720 15:24:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86046' 00:28:55.720 15:24:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 86046 00:28:55.720 15:24:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # wait 86046 00:28:57.099 [2024-07-15 15:24:35.169964] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:57.099 [2024-07-15 15:24:35.194549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.099 [2024-07-15 15:24:35.194652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:57.099 [2024-07-15 15:24:35.194670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:57.099 [2024-07-15 15:24:35.194680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.099 [2024-07-15 15:24:35.194710] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:57.099 [2024-07-15 15:24:35.199545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.099 [2024-07-15 15:24:35.199579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:57.099 [2024-07-15 15:24:35.199594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.827 ms 00:28:57.099 [2024-07-15 15:24:35.199604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.234 [2024-07-15 15:24:42.721663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.234 [2024-07-15 15:24:42.721753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:05.234 [2024-07-15 15:24:42.721771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7536.521 ms 00:29:05.234 [2024-07-15 15:24:42.721780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.234 [2024-07-15 15:24:42.723047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.234 [2024-07-15 15:24:42.723077] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:05.234 [2024-07-15 15:24:42.723098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.251 ms 00:29:05.234 [2024-07-15 15:24:42.723108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.234 [2024-07-15 15:24:42.724140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.234 [2024-07-15 15:24:42.724169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:05.234 [2024-07-15 15:24:42.724178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.000 ms 00:29:05.234 [2024-07-15 15:24:42.724186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.234 [2024-07-15 15:24:42.740806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.235 [2024-07-15 15:24:42.740847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:05.235 [2024-07-15 15:24:42.740859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.611 ms 00:29:05.235 [2024-07-15 15:24:42.740867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.235 [2024-07-15 15:24:42.751435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.235 [2024-07-15 15:24:42.751477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:05.235 [2024-07-15 15:24:42.751491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.551 ms 00:29:05.235 [2024-07-15 15:24:42.751500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.235 [2024-07-15 15:24:42.751595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.235 [2024-07-15 15:24:42.751608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:05.235 [2024-07-15 15:24:42.751618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:29:05.235 [2024-07-15 15:24:42.751628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.235 [2024-07-15 15:24:42.770176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.235 [2024-07-15 15:24:42.770231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:29:05.235 [2024-07-15 15:24:42.770243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.561 ms 00:29:05.235 [2024-07-15 15:24:42.770252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.235 [2024-07-15 15:24:42.790381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.235 [2024-07-15 15:24:42.790463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:29:05.235 [2024-07-15 15:24:42.790480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.121 ms 00:29:05.235 [2024-07-15 15:24:42.790490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.235 [2024-07-15 15:24:42.810376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.235 [2024-07-15 15:24:42.810451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:05.235 [2024-07-15 15:24:42.810466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.856 ms 00:29:05.235 [2024-07-15 15:24:42.810475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.235 [2024-07-15 15:24:42.829110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.235 
[2024-07-15 15:24:42.829176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:05.235 [2024-07-15 15:24:42.829189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.508 ms 00:29:05.235 [2024-07-15 15:24:42.829197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.235 [2024-07-15 15:24:42.829237] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:05.235 [2024-07-15 15:24:42.829268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:05.235 [2024-07-15 15:24:42.829278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:05.235 [2024-07-15 15:24:42.829287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:05.235 [2024-07-15 15:24:42.829297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:05.235 [2024-07-15 15:24:42.829306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:05.235 [2024-07-15 15:24:42.829314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:05.235 [2024-07-15 15:24:42.829322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:05.235 [2024-07-15 15:24:42.829332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:05.235 [2024-07-15 15:24:42.829341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:05.235 [2024-07-15 15:24:42.829348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:05.235 [2024-07-15 15:24:42.829357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:05.235 [2024-07-15 15:24:42.829365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:05.235 [2024-07-15 15:24:42.829373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:05.235 [2024-07-15 15:24:42.829380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:05.235 [2024-07-15 15:24:42.829388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:05.235 [2024-07-15 15:24:42.829421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:05.235 [2024-07-15 15:24:42.829429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:05.235 [2024-07-15 15:24:42.829438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:05.235 [2024-07-15 15:24:42.829450] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:05.235 [2024-07-15 15:24:42.829459] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: fde2acbb-f600-4c55-ac99-4c607b8f33d1 00:29:05.235 [2024-07-15 15:24:42.829468] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:05.235 [2024-07-15 15:24:42.829476] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:29:05.235 [2024-07-15 15:24:42.829485] 
ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:29:05.235 [2024-07-15 15:24:42.829494] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:29:05.235 [2024-07-15 15:24:42.829502] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:05.235 [2024-07-15 15:24:42.829511] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:05.235 [2024-07-15 15:24:42.829520] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:05.235 [2024-07-15 15:24:42.829527] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:05.235 [2024-07-15 15:24:42.829535] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:05.235 [2024-07-15 15:24:42.829544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.235 [2024-07-15 15:24:42.829554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:05.235 [2024-07-15 15:24:42.829566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.309 ms 00:29:05.235 [2024-07-15 15:24:42.829574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.235 [2024-07-15 15:24:42.856897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.235 [2024-07-15 15:24:42.856965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:05.235 [2024-07-15 15:24:42.856981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.329 ms 00:29:05.235 [2024-07-15 15:24:42.857005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.235 [2024-07-15 15:24:42.857720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.235 [2024-07-15 15:24:42.857748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:05.235 [2024-07-15 15:24:42.857759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.657 ms 00:29:05.235 [2024-07-15 15:24:42.857768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.235 [2024-07-15 15:24:42.934785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:05.235 [2024-07-15 15:24:42.934865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:05.235 [2024-07-15 15:24:42.934880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:05.235 [2024-07-15 15:24:42.934891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.235 [2024-07-15 15:24:42.934964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:05.235 [2024-07-15 15:24:42.934982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:05.235 [2024-07-15 15:24:42.935004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:05.235 [2024-07-15 15:24:42.935014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.235 [2024-07-15 15:24:42.935170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:05.235 [2024-07-15 15:24:42.935184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:05.235 [2024-07-15 15:24:42.935193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:05.235 [2024-07-15 15:24:42.935202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.235 [2024-07-15 15:24:42.935226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:05.235 [2024-07-15 
15:24:42.935235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:05.235 [2024-07-15 15:24:42.935248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:05.235 [2024-07-15 15:24:42.935255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.235 [2024-07-15 15:24:43.088128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:05.235 [2024-07-15 15:24:43.088206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:05.235 [2024-07-15 15:24:43.088237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:05.235 [2024-07-15 15:24:43.088247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.235 [2024-07-15 15:24:43.219367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:05.235 [2024-07-15 15:24:43.219468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:05.235 [2024-07-15 15:24:43.219484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:05.235 [2024-07-15 15:24:43.219493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.235 [2024-07-15 15:24:43.219621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:05.235 [2024-07-15 15:24:43.219633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:05.235 [2024-07-15 15:24:43.219643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:05.235 [2024-07-15 15:24:43.219653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.235 [2024-07-15 15:24:43.219700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:05.235 [2024-07-15 15:24:43.219712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:05.235 [2024-07-15 15:24:43.219721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:05.235 [2024-07-15 15:24:43.219735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.235 [2024-07-15 15:24:43.219872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:05.235 [2024-07-15 15:24:43.219884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:05.235 [2024-07-15 15:24:43.219894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:05.235 [2024-07-15 15:24:43.219903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.235 [2024-07-15 15:24:43.219940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:05.235 [2024-07-15 15:24:43.219951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:05.235 [2024-07-15 15:24:43.219960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:05.235 [2024-07-15 15:24:43.219967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.235 [2024-07-15 15:24:43.220041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:05.235 [2024-07-15 15:24:43.220052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:05.235 [2024-07-15 15:24:43.220061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:05.235 [2024-07-15 15:24:43.220069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.235 [2024-07-15 15:24:43.220127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Rollback 00:29:05.236 [2024-07-15 15:24:43.220138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:05.236 [2024-07-15 15:24:43.220147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:05.236 [2024-07-15 15:24:43.220158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.236 [2024-07-15 15:24:43.220301] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8041.204 ms, result 0 00:29:10.509 15:24:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:10.509 15:24:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:29:10.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:10.509 15:24:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:10.509 15:24:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:10.509 15:24:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:10.509 15:24:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=86682 00:29:10.509 15:24:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:10.509 15:24:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 86682 00:29:10.509 15:24:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 86682 ']' 00:29:10.509 15:24:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:10.509 15:24:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:10.509 15:24:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:10.509 15:24:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:10.509 15:24:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:10.509 15:24:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:10.509 [2024-07-15 15:24:47.841428] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
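Summarizing the property/shutdown/restart sequence logged above: verbose_mode is enabled so chunk utilization becomes visible, a jq filter confirms the NV cache holds non-empty chunks (3 in this run), prep_upgrade_on_shutdown is switched on, the target (pid 86046) is killed, which drives the long "FTL shutdown" sequence that persists L2P, NV cache and band metadata, and a fresh spdk_tgt is then started from the saved tgt.json. A sketch of that flow; the kill/wait pair stands in for the killprocess helper, and tgt.json being the file written by the earlier save_config call is an inference:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# expose chunk/band details, then check that data really sits in the NV cache
$rpc bdev_ftl_set_property -b ftl -p verbose_mode -v true
used=$($rpc bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
[[ $used -eq 0 ]] && exit 1    # used=3 here; the test expects a non-empty NV cache before shutdown

# arm the upgrade path and shut the target down cleanly
$rpc bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
kill $spdk_tgt_pid && wait $spdk_tgt_pid     # pid 86046 in this run

# bring the target back up from the saved configuration
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
    --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
spdk_tgt_pid=$!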
00:29:10.509 [2024-07-15 15:24:47.842111] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86682 ] 00:29:10.509 [2024-07-15 15:24:48.005878] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.509 [2024-07-15 15:24:48.249957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:11.447 [2024-07-15 15:24:49.278837] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:11.447 [2024-07-15 15:24:49.278902] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:11.447 [2024-07-15 15:24:49.424861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.447 [2024-07-15 15:24:49.424924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:11.447 [2024-07-15 15:24:49.424941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:11.447 [2024-07-15 15:24:49.424949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.447 [2024-07-15 15:24:49.425019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.447 [2024-07-15 15:24:49.425029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:11.447 [2024-07-15 15:24:49.425039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:29:11.447 [2024-07-15 15:24:49.425046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.447 [2024-07-15 15:24:49.425067] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:11.447 [2024-07-15 15:24:49.426295] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:11.447 [2024-07-15 15:24:49.426331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.447 [2024-07-15 15:24:49.426341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:11.447 [2024-07-15 15:24:49.426351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.270 ms 00:29:11.447 [2024-07-15 15:24:49.426359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.447 [2024-07-15 15:24:49.427842] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:11.447 [2024-07-15 15:24:49.448445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.447 [2024-07-15 15:24:49.448518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:11.447 [2024-07-15 15:24:49.448590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.643 ms 00:29:11.447 [2024-07-15 15:24:49.448623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.447 [2024-07-15 15:24:49.448715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.447 [2024-07-15 15:24:49.448755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:11.447 [2024-07-15 15:24:49.448766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:29:11.447 [2024-07-15 15:24:49.448774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.447 [2024-07-15 15:24:49.455515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.447 [2024-07-15 
15:24:49.455545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:11.447 [2024-07-15 15:24:49.455556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.680 ms 00:29:11.447 [2024-07-15 15:24:49.455563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.447 [2024-07-15 15:24:49.455629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.447 [2024-07-15 15:24:49.455642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:11.447 [2024-07-15 15:24:49.455652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:29:11.447 [2024-07-15 15:24:49.455663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.447 [2024-07-15 15:24:49.455710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.447 [2024-07-15 15:24:49.455720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:11.447 [2024-07-15 15:24:49.455728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:29:11.447 [2024-07-15 15:24:49.455735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.447 [2024-07-15 15:24:49.455763] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:11.447 [2024-07-15 15:24:49.461448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.447 [2024-07-15 15:24:49.461475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:11.447 [2024-07-15 15:24:49.461484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.704 ms 00:29:11.447 [2024-07-15 15:24:49.461491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.447 [2024-07-15 15:24:49.461518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.447 [2024-07-15 15:24:49.461526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:11.447 [2024-07-15 15:24:49.461534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:11.447 [2024-07-15 15:24:49.461543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.447 [2024-07-15 15:24:49.461586] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:11.447 [2024-07-15 15:24:49.461607] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:11.447 [2024-07-15 15:24:49.461639] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:11.447 [2024-07-15 15:24:49.461653] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:29:11.447 [2024-07-15 15:24:49.461733] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:11.447 [2024-07-15 15:24:49.461742] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:11.447 [2024-07-15 15:24:49.461754] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:29:11.447 [2024-07-15 15:24:49.461779] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:11.447 [2024-07-15 15:24:49.461788] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:29:11.447 [2024-07-15 15:24:49.461796] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:11.447 [2024-07-15 15:24:49.461803] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:11.447 [2024-07-15 15:24:49.461810] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:11.447 [2024-07-15 15:24:49.461818] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:11.447 [2024-07-15 15:24:49.461826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.447 [2024-07-15 15:24:49.461834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:11.447 [2024-07-15 15:24:49.461841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.244 ms 00:29:11.447 [2024-07-15 15:24:49.461849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.447 [2024-07-15 15:24:49.461919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.447 [2024-07-15 15:24:49.461928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:11.447 [2024-07-15 15:24:49.461935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:29:11.447 [2024-07-15 15:24:49.461945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.447 [2024-07-15 15:24:49.462109] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:11.447 [2024-07-15 15:24:49.462141] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:11.447 [2024-07-15 15:24:49.462165] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:11.447 [2024-07-15 15:24:49.462187] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.447 [2024-07-15 15:24:49.462225] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:11.447 [2024-07-15 15:24:49.462248] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:11.447 [2024-07-15 15:24:49.462303] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:11.447 [2024-07-15 15:24:49.462326] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:11.447 [2024-07-15 15:24:49.462367] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:11.447 [2024-07-15 15:24:49.462394] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.447 [2024-07-15 15:24:49.462421] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:11.447 [2024-07-15 15:24:49.462453] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:11.447 [2024-07-15 15:24:49.462481] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.447 [2024-07-15 15:24:49.462502] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:11.447 [2024-07-15 15:24:49.462548] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:11.447 [2024-07-15 15:24:49.462597] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.447 [2024-07-15 15:24:49.462637] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:11.447 [2024-07-15 15:24:49.462661] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:11.447 [2024-07-15 15:24:49.462684] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.447 [2024-07-15 15:24:49.462706] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:11.447 [2024-07-15 15:24:49.462738] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:11.447 [2024-07-15 15:24:49.462760] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:11.447 [2024-07-15 15:24:49.462784] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:11.447 [2024-07-15 15:24:49.462832] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:11.447 [2024-07-15 15:24:49.462866] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:11.447 [2024-07-15 15:24:49.462898] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:11.447 [2024-07-15 15:24:49.462928] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:11.447 [2024-07-15 15:24:49.462962] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:11.447 [2024-07-15 15:24:49.463002] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:11.447 [2024-07-15 15:24:49.463014] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:11.447 [2024-07-15 15:24:49.463022] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:11.447 [2024-07-15 15:24:49.463030] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:11.447 [2024-07-15 15:24:49.463038] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:11.447 [2024-07-15 15:24:49.463046] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.447 [2024-07-15 15:24:49.463053] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:11.447 [2024-07-15 15:24:49.463061] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:11.447 [2024-07-15 15:24:49.463070] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.447 [2024-07-15 15:24:49.463077] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:11.448 [2024-07-15 15:24:49.463085] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:11.448 [2024-07-15 15:24:49.463104] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.448 [2024-07-15 15:24:49.463111] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:11.448 [2024-07-15 15:24:49.463118] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:11.448 [2024-07-15 15:24:49.463128] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.448 [2024-07-15 15:24:49.463135] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:11.448 [2024-07-15 15:24:49.463165] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:11.448 [2024-07-15 15:24:49.463174] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:11.448 [2024-07-15 15:24:49.463182] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.448 [2024-07-15 15:24:49.463191] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:11.448 [2024-07-15 15:24:49.463199] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:11.448 [2024-07-15 15:24:49.463208] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:11.448 [2024-07-15 15:24:49.463216] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:11.448 [2024-07-15 15:24:49.463238] 
ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:11.448 [2024-07-15 15:24:49.463246] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:11.448 [2024-07-15 15:24:49.463257] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:11.448 [2024-07-15 15:24:49.463269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:11.448 [2024-07-15 15:24:49.463279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:11.448 [2024-07-15 15:24:49.463289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:11.448 [2024-07-15 15:24:49.463297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:11.448 [2024-07-15 15:24:49.463306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:11.448 [2024-07-15 15:24:49.463315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:11.448 [2024-07-15 15:24:49.463324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:11.448 [2024-07-15 15:24:49.463333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:11.448 [2024-07-15 15:24:49.463342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:11.448 [2024-07-15 15:24:49.463351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:11.448 [2024-07-15 15:24:49.463359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:11.448 [2024-07-15 15:24:49.463368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:11.448 [2024-07-15 15:24:49.463376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:11.448 [2024-07-15 15:24:49.463385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:11.448 [2024-07-15 15:24:49.463394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:11.448 [2024-07-15 15:24:49.463402] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:11.448 [2024-07-15 15:24:49.463412] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:11.448 [2024-07-15 15:24:49.463422] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:11.448 [2024-07-15 15:24:49.463430] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:11.448 [2024-07-15 15:24:49.463438] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:11.448 [2024-07-15 15:24:49.463449] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:11.448 [2024-07-15 15:24:49.463460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.448 [2024-07-15 15:24:49.463469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:11.448 [2024-07-15 15:24:49.463478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.484 ms 00:29:11.448 [2024-07-15 15:24:49.463490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.448 [2024-07-15 15:24:49.463553] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:29:11.448 [2024-07-15 15:24:49.463566] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:14.790 [2024-07-15 15:24:52.538969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.790 [2024-07-15 15:24:52.539041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:14.790 [2024-07-15 15:24:52.539057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3081.345 ms 00:29:14.790 [2024-07-15 15:24:52.539066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.790 [2024-07-15 15:24:52.583320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.790 [2024-07-15 15:24:52.583373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:14.790 [2024-07-15 15:24:52.583390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 44.008 ms 00:29:14.790 [2024-07-15 15:24:52.583404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.790 [2024-07-15 15:24:52.583528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.790 [2024-07-15 15:24:52.583540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:14.790 [2024-07-15 15:24:52.583550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:29:14.790 [2024-07-15 15:24:52.583559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.791 [2024-07-15 15:24:52.633819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.791 [2024-07-15 15:24:52.633871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:14.791 [2024-07-15 15:24:52.633885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 50.316 ms 00:29:14.791 [2024-07-15 15:24:52.633892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.791 [2024-07-15 15:24:52.633942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.791 [2024-07-15 15:24:52.633950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:14.791 [2024-07-15 15:24:52.633959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:14.791 [2024-07-15 15:24:52.633965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.791 [2024-07-15 15:24:52.634477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.791 [2024-07-15 15:24:52.634495] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:14.791 [2024-07-15 15:24:52.634507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.415 ms 00:29:14.791 [2024-07-15 15:24:52.634514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.791 [2024-07-15 15:24:52.634556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.791 [2024-07-15 15:24:52.634565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:14.791 [2024-07-15 15:24:52.634572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:29:14.791 [2024-07-15 15:24:52.634586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.791 [2024-07-15 15:24:52.656988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.791 [2024-07-15 15:24:52.657042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:14.791 [2024-07-15 15:24:52.657056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.406 ms 00:29:14.791 [2024-07-15 15:24:52.657064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.791 [2024-07-15 15:24:52.677648] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:29:14.791 [2024-07-15 15:24:52.677691] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:14.791 [2024-07-15 15:24:52.677704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.791 [2024-07-15 15:24:52.677712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:29:14.791 [2024-07-15 15:24:52.677721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.542 ms 00:29:14.791 [2024-07-15 15:24:52.677728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.791 [2024-07-15 15:24:52.698970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.791 [2024-07-15 15:24:52.699014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:29:14.791 [2024-07-15 15:24:52.699026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.229 ms 00:29:14.791 [2024-07-15 15:24:52.699033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.791 [2024-07-15 15:24:52.718066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.791 [2024-07-15 15:24:52.718115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:29:14.791 [2024-07-15 15:24:52.718127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.022 ms 00:29:14.791 [2024-07-15 15:24:52.718134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.791 [2024-07-15 15:24:52.737389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.791 [2024-07-15 15:24:52.737420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:29:14.791 [2024-07-15 15:24:52.737429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.255 ms 00:29:14.791 [2024-07-15 15:24:52.737436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.791 [2024-07-15 15:24:52.738283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.791 [2024-07-15 15:24:52.738309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:14.791 [2024-07-15 
15:24:52.738318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.741 ms 00:29:14.791 [2024-07-15 15:24:52.738324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.791 [2024-07-15 15:24:52.844951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.791 [2024-07-15 15:24:52.845019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:14.791 [2024-07-15 15:24:52.845034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 106.788 ms 00:29:14.791 [2024-07-15 15:24:52.845042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.791 [2024-07-15 15:24:52.857707] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:14.791 [2024-07-15 15:24:52.858742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.791 [2024-07-15 15:24:52.858767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:14.791 [2024-07-15 15:24:52.858780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.653 ms 00:29:14.791 [2024-07-15 15:24:52.858792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.791 [2024-07-15 15:24:52.858891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.791 [2024-07-15 15:24:52.858901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:29:14.791 [2024-07-15 15:24:52.858910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:14.791 [2024-07-15 15:24:52.858917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.791 [2024-07-15 15:24:52.858971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.791 [2024-07-15 15:24:52.858982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:14.791 [2024-07-15 15:24:52.859002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:29:14.791 [2024-07-15 15:24:52.859011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.791 [2024-07-15 15:24:52.859036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.791 [2024-07-15 15:24:52.859045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:14.791 [2024-07-15 15:24:52.859053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:14.791 [2024-07-15 15:24:52.859060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.791 [2024-07-15 15:24:52.859092] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:14.791 [2024-07-15 15:24:52.859102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.791 [2024-07-15 15:24:52.859110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:14.791 [2024-07-15 15:24:52.859118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:29:14.791 [2024-07-15 15:24:52.859126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.791 [2024-07-15 15:24:52.897686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.791 [2024-07-15 15:24:52.897729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:14.791 [2024-07-15 15:24:52.897743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.614 ms 00:29:14.791 [2024-07-15 15:24:52.897752] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.791 [2024-07-15 15:24:52.897830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.791 [2024-07-15 15:24:52.897840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:14.791 [2024-07-15 15:24:52.897849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:29:14.791 [2024-07-15 15:24:52.897856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.791 [2024-07-15 15:24:52.899186] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3480.419 ms, result 0 00:29:15.049 [2024-07-15 15:24:52.914028] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:15.049 [2024-07-15 15:24:52.929979] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:15.049 [2024-07-15 15:24:52.939841] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:15.049 15:24:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:15.049 15:24:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:29:15.049 15:24:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:15.049 15:24:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:15.049 15:24:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:15.306 [2024-07-15 15:24:53.163499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.306 [2024-07-15 15:24:53.163555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:15.306 [2024-07-15 15:24:53.163571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:29:15.306 [2024-07-15 15:24:53.163580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.306 [2024-07-15 15:24:53.163613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.306 [2024-07-15 15:24:53.163626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:15.306 [2024-07-15 15:24:53.163635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:15.306 [2024-07-15 15:24:53.163644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.306 [2024-07-15 15:24:53.163663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.306 [2024-07-15 15:24:53.163672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:15.306 [2024-07-15 15:24:53.163681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:15.306 [2024-07-15 15:24:53.163690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.306 [2024-07-15 15:24:53.163755] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.251 ms, result 0 00:29:15.306 true 00:29:15.306 15:24:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:15.306 { 00:29:15.306 "name": "ftl", 00:29:15.306 "properties": [ 00:29:15.306 { 00:29:15.306 "name": "superblock_version", 00:29:15.306 "value": 5, 00:29:15.306 "read-only": true 00:29:15.306 }, 
00:29:15.306 { 00:29:15.306 "name": "base_device", 00:29:15.306 "bands": [ 00:29:15.306 { 00:29:15.306 "id": 0, 00:29:15.306 "state": "CLOSED", 00:29:15.306 "validity": 1.0 00:29:15.306 }, 00:29:15.306 { 00:29:15.306 "id": 1, 00:29:15.306 "state": "CLOSED", 00:29:15.306 "validity": 1.0 00:29:15.306 }, 00:29:15.306 { 00:29:15.306 "id": 2, 00:29:15.306 "state": "CLOSED", 00:29:15.306 "validity": 0.007843137254901933 00:29:15.306 }, 00:29:15.306 { 00:29:15.306 "id": 3, 00:29:15.306 "state": "FREE", 00:29:15.306 "validity": 0.0 00:29:15.306 }, 00:29:15.306 { 00:29:15.306 "id": 4, 00:29:15.306 "state": "FREE", 00:29:15.306 "validity": 0.0 00:29:15.306 }, 00:29:15.306 { 00:29:15.306 "id": 5, 00:29:15.306 "state": "FREE", 00:29:15.306 "validity": 0.0 00:29:15.306 }, 00:29:15.306 { 00:29:15.306 "id": 6, 00:29:15.306 "state": "FREE", 00:29:15.306 "validity": 0.0 00:29:15.306 }, 00:29:15.306 { 00:29:15.306 "id": 7, 00:29:15.306 "state": "FREE", 00:29:15.306 "validity": 0.0 00:29:15.306 }, 00:29:15.306 { 00:29:15.306 "id": 8, 00:29:15.306 "state": "FREE", 00:29:15.306 "validity": 0.0 00:29:15.306 }, 00:29:15.306 { 00:29:15.306 "id": 9, 00:29:15.306 "state": "FREE", 00:29:15.306 "validity": 0.0 00:29:15.306 }, 00:29:15.306 { 00:29:15.306 "id": 10, 00:29:15.306 "state": "FREE", 00:29:15.306 "validity": 0.0 00:29:15.306 }, 00:29:15.306 { 00:29:15.306 "id": 11, 00:29:15.306 "state": "FREE", 00:29:15.306 "validity": 0.0 00:29:15.306 }, 00:29:15.306 { 00:29:15.306 "id": 12, 00:29:15.306 "state": "FREE", 00:29:15.306 "validity": 0.0 00:29:15.306 }, 00:29:15.306 { 00:29:15.306 "id": 13, 00:29:15.306 "state": "FREE", 00:29:15.306 "validity": 0.0 00:29:15.306 }, 00:29:15.306 { 00:29:15.306 "id": 14, 00:29:15.306 "state": "FREE", 00:29:15.306 "validity": 0.0 00:29:15.306 }, 00:29:15.306 { 00:29:15.306 "id": 15, 00:29:15.306 "state": "FREE", 00:29:15.306 "validity": 0.0 00:29:15.306 }, 00:29:15.306 { 00:29:15.306 "id": 16, 00:29:15.306 "state": "FREE", 00:29:15.306 "validity": 0.0 00:29:15.306 }, 00:29:15.306 { 00:29:15.306 "id": 17, 00:29:15.306 "state": "FREE", 00:29:15.306 "validity": 0.0 00:29:15.306 } 00:29:15.306 ], 00:29:15.306 "read-only": true 00:29:15.306 }, 00:29:15.306 { 00:29:15.306 "name": "cache_device", 00:29:15.307 "type": "bdev", 00:29:15.307 "chunks": [ 00:29:15.307 { 00:29:15.307 "id": 0, 00:29:15.307 "state": "INACTIVE", 00:29:15.307 "utilization": 0.0 00:29:15.307 }, 00:29:15.307 { 00:29:15.307 "id": 1, 00:29:15.307 "state": "OPEN", 00:29:15.307 "utilization": 0.0 00:29:15.307 }, 00:29:15.307 { 00:29:15.307 "id": 2, 00:29:15.307 "state": "OPEN", 00:29:15.307 "utilization": 0.0 00:29:15.307 }, 00:29:15.307 { 00:29:15.307 "id": 3, 00:29:15.307 "state": "FREE", 00:29:15.307 "utilization": 0.0 00:29:15.307 }, 00:29:15.307 { 00:29:15.307 "id": 4, 00:29:15.307 "state": "FREE", 00:29:15.307 "utilization": 0.0 00:29:15.307 } 00:29:15.307 ], 00:29:15.307 "read-only": true 00:29:15.307 }, 00:29:15.307 { 00:29:15.307 "name": "verbose_mode", 00:29:15.307 "value": true, 00:29:15.307 "unit": "", 00:29:15.307 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:15.307 }, 00:29:15.307 { 00:29:15.307 "name": "prep_upgrade_on_shutdown", 00:29:15.307 "value": false, 00:29:15.307 "unit": "", 00:29:15.307 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:15.307 } 00:29:15.307 ] 00:29:15.307 } 00:29:15.307 15:24:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:29:15.307 15:24:53 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:15.307 15:24:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:15.564 15:24:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:29:15.564 15:24:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:29:15.564 15:24:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:29:15.564 15:24:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:29:15.564 15:24:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:15.821 15:24:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:29:15.822 15:24:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:29:15.822 15:24:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:29:15.822 15:24:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:15.822 15:24:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:15.822 15:24:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:15.822 15:24:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:15.822 Validate MD5 checksum, iteration 1 00:29:15.822 15:24:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:15.822 15:24:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:15.822 15:24:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:15.822 15:24:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:15.822 15:24:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:15.822 15:24:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:15.822 [2024-07-15 15:24:53.893974] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
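Note on the checksum pass that begins above: each iteration reads a 1 GiB slice of the FTL bdev over NVMe/TCP (spdk_dd acts as the initiator described by ini.json, copying 1024 blocks of 1 MiB from ftln1 at queue depth 2, offset by --skip), writes it to the scratch file, and the file's MD5 is then compared against the expected sum for that slice in the [[ ... != ... ]] test traced further down. A minimal sketch of what the test_validate_checksum function appears to do, reconstructed only from the commands traced here; names other than skip/sum/iterations are illustrative, and the expected sums are assumed to have been recorded when the data was originally written earlier in the run:

    skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # spdk_dd connects to the NVMe/TCP target listening on 127.0.0.1:4420 (see above)
        # and copies 1024 x 1 MiB blocks from bdev ftln1, starting $skip blocks in.
        tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sum=$(md5sum "$testfile" | cut -f1 -d' ')
        [[ $sum == "${expected_md5[i]}" ]] || return 1   # a mismatch fails the test
    done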
00:29:15.822 [2024-07-15 15:24:53.894192] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86762 ] 00:29:16.080 [2024-07-15 15:24:54.044895] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.339 [2024-07-15 15:24:54.302609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:21.032  Copying: 654/1024 [MB] (654 MBps) Copying: 1024/1024 [MB] (average 641 MBps) 00:29:21.032 00:29:21.032 15:24:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:21.032 15:24:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:22.411 15:25:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:22.411 Validate MD5 checksum, iteration 2 00:29:22.411 15:25:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=7d04650d73398d8fa3ab4a193fb20bfb 00:29:22.411 15:25:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 7d04650d73398d8fa3ab4a193fb20bfb != \7\d\0\4\6\5\0\d\7\3\3\9\8\d\8\f\a\3\a\b\4\a\1\9\3\f\b\2\0\b\f\b ]] 00:29:22.411 15:25:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:22.411 15:25:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:22.411 15:25:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:22.411 15:25:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:22.411 15:25:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:22.411 15:25:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:22.411 15:25:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:22.411 15:25:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:22.411 15:25:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:22.411 [2024-07-15 15:25:00.472136] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
00:29:22.411 [2024-07-15 15:25:00.472390] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86829 ] 00:29:22.671 [2024-07-15 15:25:00.636235] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:22.931 [2024-07-15 15:25:00.880177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:29.609  Copying: 595/1024 [MB] (595 MBps) Copying: 1024/1024 [MB] (average 593 MBps) 00:29:29.609 00:29:29.609 15:25:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:29:29.609 15:25:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:30.983 15:25:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:30.983 15:25:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=33f1892fece898ee3404f5d062fe6fc9 00:29:30.983 15:25:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 33f1892fece898ee3404f5d062fe6fc9 != \3\3\f\1\8\9\2\f\e\c\e\8\9\8\e\e\3\4\0\4\f\5\d\0\6\2\f\e\6\f\c\9 ]] 00:29:30.983 15:25:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:30.983 15:25:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:30.983 15:25:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:29:30.983 15:25:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 86682 ]] 00:29:30.983 15:25:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 86682 00:29:30.983 15:25:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:29:30.983 15:25:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:29:30.983 15:25:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:30.983 15:25:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:30.983 15:25:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:30.983 15:25:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:30.983 15:25:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=86923 00:29:30.983 15:25:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:30.983 15:25:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 86923 00:29:30.983 15:25:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 86923 ']' 00:29:30.983 15:25:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:30.983 15:25:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:30.983 15:25:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:30.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:30.983 15:25:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:30.983 15:25:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:30.983 [2024-07-15 15:25:09.044128] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 00:29:30.983 [2024-07-15 15:25:09.044335] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86923 ] 00:29:31.242 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 828: 86682 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:29:31.242 [2024-07-15 15:25:09.212206] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:31.500 [2024-07-15 15:25:09.492935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:32.874 [2024-07-15 15:25:10.561243] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:32.874 [2024-07-15 15:25:10.561414] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:32.874 [2024-07-15 15:25:10.709672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.874 [2024-07-15 15:25:10.709810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:32.874 [2024-07-15 15:25:10.709853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:32.874 [2024-07-15 15:25:10.709877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.874 [2024-07-15 15:25:10.709993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.874 [2024-07-15 15:25:10.710059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:32.874 [2024-07-15 15:25:10.710096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.078 ms 00:29:32.874 [2024-07-15 15:25:10.710137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.874 [2024-07-15 15:25:10.710196] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:32.874 [2024-07-15 15:25:10.711637] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:32.874 [2024-07-15 15:25:10.711731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.874 [2024-07-15 15:25:10.711779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:32.874 [2024-07-15 15:25:10.711822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.547 ms 00:29:32.874 [2024-07-15 15:25:10.711856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.875 [2024-07-15 15:25:10.712284] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:32.875 [2024-07-15 15:25:10.741553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.875 [2024-07-15 15:25:10.741685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:32.875 [2024-07-15 15:25:10.741722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.323 ms 00:29:32.875 [2024-07-15 15:25:10.741754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.875 [2024-07-15 15:25:10.759553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:29:32.875 [2024-07-15 15:25:10.759669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:32.875 [2024-07-15 15:25:10.759723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:29:32.875 [2024-07-15 15:25:10.759763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.875 [2024-07-15 15:25:10.760247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.875 [2024-07-15 15:25:10.760302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:32.875 [2024-07-15 15:25:10.760358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.317 ms 00:29:32.875 [2024-07-15 15:25:10.760376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.875 [2024-07-15 15:25:10.760450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.875 [2024-07-15 15:25:10.760465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:32.875 [2024-07-15 15:25:10.760475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:29:32.875 [2024-07-15 15:25:10.760484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.875 [2024-07-15 15:25:10.760523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.875 [2024-07-15 15:25:10.760533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:32.875 [2024-07-15 15:25:10.760541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:29:32.875 [2024-07-15 15:25:10.760552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.875 [2024-07-15 15:25:10.760582] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:32.875 [2024-07-15 15:25:10.766262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.875 [2024-07-15 15:25:10.766292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:32.875 [2024-07-15 15:25:10.766303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.701 ms 00:29:32.875 [2024-07-15 15:25:10.766310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.875 [2024-07-15 15:25:10.766357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.875 [2024-07-15 15:25:10.766367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:32.875 [2024-07-15 15:25:10.766376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:32.875 [2024-07-15 15:25:10.766384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.875 [2024-07-15 15:25:10.766424] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:32.875 [2024-07-15 15:25:10.766447] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:32.875 [2024-07-15 15:25:10.766486] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:32.875 [2024-07-15 15:25:10.766502] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:29:32.875 [2024-07-15 15:25:10.766602] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:32.875 [2024-07-15 15:25:10.766613] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:32.875 [2024-07-15 15:25:10.766624] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:29:32.875 [2024-07-15 15:25:10.766652] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:32.875 [2024-07-15 15:25:10.766661] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:32.875 [2024-07-15 15:25:10.766671] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:32.875 [2024-07-15 15:25:10.766680] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:32.875 [2024-07-15 15:25:10.766692] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:32.875 [2024-07-15 15:25:10.766701] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:32.875 [2024-07-15 15:25:10.766711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.875 [2024-07-15 15:25:10.766719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:32.875 [2024-07-15 15:25:10.766732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.290 ms 00:29:32.875 [2024-07-15 15:25:10.766741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.875 [2024-07-15 15:25:10.766823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.875 [2024-07-15 15:25:10.766833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:32.875 [2024-07-15 15:25:10.766842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.063 ms 00:29:32.875 [2024-07-15 15:25:10.766851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.875 [2024-07-15 15:25:10.766960] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:32.875 [2024-07-15 15:25:10.766972] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:32.875 [2024-07-15 15:25:10.766981] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:32.875 [2024-07-15 15:25:10.766991] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:32.875 [2024-07-15 15:25:10.767000] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:32.875 [2024-07-15 15:25:10.767024] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:32.875 [2024-07-15 15:25:10.767033] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:32.875 [2024-07-15 15:25:10.767041] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:32.875 [2024-07-15 15:25:10.767051] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:32.875 [2024-07-15 15:25:10.767061] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:32.875 [2024-07-15 15:25:10.767069] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:32.875 [2024-07-15 15:25:10.767077] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:32.875 [2024-07-15 15:25:10.767085] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:32.875 [2024-07-15 15:25:10.767093] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:32.875 [2024-07-15 15:25:10.767100] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:29:32.875 [2024-07-15 15:25:10.767108] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:32.875 [2024-07-15 15:25:10.767116] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:32.875 [2024-07-15 15:25:10.767124] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:32.875 [2024-07-15 15:25:10.767131] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:32.875 [2024-07-15 15:25:10.767139] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:32.875 [2024-07-15 15:25:10.767147] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:32.875 [2024-07-15 15:25:10.767155] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:32.875 [2024-07-15 15:25:10.767163] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:32.875 [2024-07-15 15:25:10.767170] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:32.875 [2024-07-15 15:25:10.767179] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:32.875 [2024-07-15 15:25:10.767186] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:32.875 [2024-07-15 15:25:10.767194] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:32.875 [2024-07-15 15:25:10.767201] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:32.875 [2024-07-15 15:25:10.767209] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:32.875 [2024-07-15 15:25:10.767217] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:32.875 [2024-07-15 15:25:10.767225] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:32.875 [2024-07-15 15:25:10.767232] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:32.875 [2024-07-15 15:25:10.767240] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:32.875 [2024-07-15 15:25:10.767247] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:32.875 [2024-07-15 15:25:10.767255] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:32.875 [2024-07-15 15:25:10.767263] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:32.875 [2024-07-15 15:25:10.767270] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:32.875 [2024-07-15 15:25:10.767278] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:32.875 [2024-07-15 15:25:10.767287] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:32.875 [2024-07-15 15:25:10.767295] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:32.875 [2024-07-15 15:25:10.767303] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:32.875 [2024-07-15 15:25:10.767311] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:32.875 [2024-07-15 15:25:10.767319] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:32.875 [2024-07-15 15:25:10.767326] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:32.875 [2024-07-15 15:25:10.767335] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:32.875 [2024-07-15 15:25:10.767344] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:32.875 [2024-07-15 15:25:10.767352] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:29:32.875 [2024-07-15 15:25:10.767360] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:32.875 [2024-07-15 15:25:10.767369] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:32.875 [2024-07-15 15:25:10.767391] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:32.875 [2024-07-15 15:25:10.767400] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:32.875 [2024-07-15 15:25:10.767419] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:32.875 [2024-07-15 15:25:10.767427] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:32.875 [2024-07-15 15:25:10.767436] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:32.875 [2024-07-15 15:25:10.767450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:32.875 [2024-07-15 15:25:10.767458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:32.875 [2024-07-15 15:25:10.767467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:32.875 [2024-07-15 15:25:10.767475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:32.875 [2024-07-15 15:25:10.767483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:32.875 [2024-07-15 15:25:10.767491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:32.875 [2024-07-15 15:25:10.767502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:32.876 [2024-07-15 15:25:10.767510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:32.876 [2024-07-15 15:25:10.767518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:32.876 [2024-07-15 15:25:10.767526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:32.876 [2024-07-15 15:25:10.767534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:32.876 [2024-07-15 15:25:10.767542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:32.876 [2024-07-15 15:25:10.767550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:32.876 [2024-07-15 15:25:10.767558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:32.876 [2024-07-15 15:25:10.767567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:32.876 [2024-07-15 15:25:10.767575] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:29:32.876 [2024-07-15 15:25:10.767583] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:32.876 [2024-07-15 15:25:10.767593] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:32.876 [2024-07-15 15:25:10.767602] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:32.876 [2024-07-15 15:25:10.767610] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:32.876 [2024-07-15 15:25:10.767618] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:32.876 [2024-07-15 15:25:10.767627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.876 [2024-07-15 15:25:10.767636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:32.876 [2024-07-15 15:25:10.767644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.732 ms 00:29:32.876 [2024-07-15 15:25:10.767652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.876 [2024-07-15 15:25:10.813563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.876 [2024-07-15 15:25:10.813616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:32.876 [2024-07-15 15:25:10.813630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 45.934 ms 00:29:32.876 [2024-07-15 15:25:10.813638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.876 [2024-07-15 15:25:10.813701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.876 [2024-07-15 15:25:10.813710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:32.876 [2024-07-15 15:25:10.813719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:29:32.876 [2024-07-15 15:25:10.813730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.876 [2024-07-15 15:25:10.864210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.876 [2024-07-15 15:25:10.864279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:32.876 [2024-07-15 15:25:10.864317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 50.495 ms 00:29:32.876 [2024-07-15 15:25:10.864330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.876 [2024-07-15 15:25:10.864413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.876 [2024-07-15 15:25:10.864435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:32.876 [2024-07-15 15:25:10.864449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:32.876 [2024-07-15 15:25:10.864461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.876 [2024-07-15 15:25:10.864622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.876 [2024-07-15 15:25:10.864642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:32.876 [2024-07-15 15:25:10.864655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.078 ms 00:29:32.876 [2024-07-15 15:25:10.864667] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:29:32.876 [2024-07-15 15:25:10.864725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.876 [2024-07-15 15:25:10.864738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:32.876 [2024-07-15 15:25:10.864755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:29:32.876 [2024-07-15 15:25:10.864766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.876 [2024-07-15 15:25:10.885760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.876 [2024-07-15 15:25:10.885849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:32.876 [2024-07-15 15:25:10.885886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.003 ms 00:29:32.876 [2024-07-15 15:25:10.885899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.876 [2024-07-15 15:25:10.886111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.876 [2024-07-15 15:25:10.886134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:29:32.876 [2024-07-15 15:25:10.886148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:29:32.876 [2024-07-15 15:25:10.886160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.876 [2024-07-15 15:25:10.923096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.876 [2024-07-15 15:25:10.923152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:29:32.876 [2024-07-15 15:25:10.923166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.980 ms 00:29:32.876 [2024-07-15 15:25:10.923175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.876 [2024-07-15 15:25:10.939276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.876 [2024-07-15 15:25:10.939320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:32.876 [2024-07-15 15:25:10.939333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.841 ms 00:29:32.876 [2024-07-15 15:25:10.939342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:33.135 [2024-07-15 15:25:11.034319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:33.135 [2024-07-15 15:25:11.034384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:33.135 [2024-07-15 15:25:11.034398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 95.072 ms 00:29:33.135 [2024-07-15 15:25:11.034406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:33.135 [2024-07-15 15:25:11.034662] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:29:33.136 [2024-07-15 15:25:11.034809] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:29:33.136 [2024-07-15 15:25:11.034944] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:29:33.136 [2024-07-15 15:25:11.035149] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:29:33.136 [2024-07-15 15:25:11.035163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:33.136 [2024-07-15 15:25:11.035173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:29:33.136 [2024-07-15 
15:25:11.035183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.662 ms 00:29:33.136 [2024-07-15 15:25:11.035192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:33.136 [2024-07-15 15:25:11.035286] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:29:33.136 [2024-07-15 15:25:11.035298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:33.136 [2024-07-15 15:25:11.035308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:29:33.136 [2024-07-15 15:25:11.035318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:29:33.136 [2024-07-15 15:25:11.035326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:33.136 [2024-07-15 15:25:11.060554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:33.136 [2024-07-15 15:25:11.060601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:29:33.136 [2024-07-15 15:25:11.060613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.252 ms 00:29:33.136 [2024-07-15 15:25:11.060641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:33.136 [2024-07-15 15:25:11.076051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:33.136 [2024-07-15 15:25:11.076088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:29:33.136 [2024-07-15 15:25:11.076097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:29:33.136 [2024-07-15 15:25:11.076105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:33.136 [2024-07-15 15:25:11.076369] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:29:33.705 [2024-07-15 15:25:11.587500] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:29:33.705 [2024-07-15 15:25:11.587712] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:29:34.272 [2024-07-15 15:25:12.114368] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:29:34.272 [2024-07-15 15:25:12.114511] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:34.272 [2024-07-15 15:25:12.114545] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:34.272 [2024-07-15 15:25:12.114560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.272 [2024-07-15 15:25:12.114571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:29:34.272 [2024-07-15 15:25:12.114593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1040.385 ms 00:29:34.273 [2024-07-15 15:25:12.114603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.273 [2024-07-15 15:25:12.114648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.273 [2024-07-15 15:25:12.114660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:29:34.273 [2024-07-15 15:25:12.114669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:34.273 [2024-07-15 15:25:12.114678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:29:34.273 [2024-07-15 15:25:12.129063] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:34.273 [2024-07-15 15:25:12.129197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.273 [2024-07-15 15:25:12.129214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:34.273 [2024-07-15 15:25:12.129225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.528 ms 00:29:34.273 [2024-07-15 15:25:12.129233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.273 [2024-07-15 15:25:12.129843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.273 [2024-07-15 15:25:12.129860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:29:34.273 [2024-07-15 15:25:12.129869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.514 ms 00:29:34.273 [2024-07-15 15:25:12.129876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.273 [2024-07-15 15:25:12.131999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.273 [2024-07-15 15:25:12.132032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:29:34.273 [2024-07-15 15:25:12.132041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.109 ms 00:29:34.273 [2024-07-15 15:25:12.132048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.273 [2024-07-15 15:25:12.132090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.273 [2024-07-15 15:25:12.132100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:29:34.273 [2024-07-15 15:25:12.132107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:34.273 [2024-07-15 15:25:12.132115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.273 [2024-07-15 15:25:12.132220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.273 [2024-07-15 15:25:12.132230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:34.273 [2024-07-15 15:25:12.132241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:29:34.273 [2024-07-15 15:25:12.132248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.273 [2024-07-15 15:25:12.132269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.273 [2024-07-15 15:25:12.132277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:34.273 [2024-07-15 15:25:12.132288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:34.273 [2024-07-15 15:25:12.132295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.273 [2024-07-15 15:25:12.132322] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:34.273 [2024-07-15 15:25:12.132332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.273 [2024-07-15 15:25:12.132339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:34.273 [2024-07-15 15:25:12.132346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:29:34.273 [2024-07-15 15:25:12.132356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.273 [2024-07-15 15:25:12.132406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.273 
[2024-07-15 15:25:12.132415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:34.273 [2024-07-15 15:25:12.132423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:29:34.273 [2024-07-15 15:25:12.132430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.273 [2024-07-15 15:25:12.133568] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1426.028 ms, result 0 00:29:34.273 [2024-07-15 15:25:12.145969] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:34.273 [2024-07-15 15:25:12.161970] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:34.273 [2024-07-15 15:25:12.172675] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:34.273 15:25:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:34.273 15:25:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:29:34.273 15:25:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:34.273 15:25:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:34.273 15:25:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:29:34.273 15:25:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:34.273 15:25:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:34.273 15:25:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:34.273 Validate MD5 checksum, iteration 1 00:29:34.273 15:25:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:34.273 15:25:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:34.273 15:25:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:34.273 15:25:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:34.273 15:25:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:34.273 15:25:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:34.273 15:25:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:34.273 [2024-07-15 15:25:12.325945] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
00:29:34.273 [2024-07-15 15:25:12.326146] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86959 ] 00:29:34.531 [2024-07-15 15:25:12.504066] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.789 [2024-07-15 15:25:12.766685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:40.555  Copying: 610/1024 [MB] (610 MBps) Copying: 1024/1024 [MB] (average 588 MBps) 00:29:40.555 00:29:40.555 15:25:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:40.555 15:25:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:42.509 15:25:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:42.509 15:25:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=7d04650d73398d8fa3ab4a193fb20bfb 00:29:42.509 Validate MD5 checksum, iteration 2 00:29:42.509 15:25:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 7d04650d73398d8fa3ab4a193fb20bfb != \7\d\0\4\6\5\0\d\7\3\3\9\8\d\8\f\a\3\a\b\4\a\1\9\3\f\b\2\0\b\f\b ]] 00:29:42.509 15:25:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:42.509 15:25:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:42.509 15:25:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:42.509 15:25:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:42.509 15:25:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:42.509 15:25:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:42.509 15:25:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:42.509 15:25:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:42.509 15:25:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:42.509 [2024-07-15 15:25:20.586807] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
00:29:42.509 [2024-07-15 15:25:20.587026] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87048 ] 00:29:42.768 [2024-07-15 15:25:20.773754] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:43.026 [2024-07-15 15:25:21.014133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:47.396  Copying: 574/1024 [MB] (574 MBps) Copying: 1024/1024 [MB] (average 570 MBps) 00:29:47.396 00:29:47.396 15:25:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:29:47.396 15:25:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:49.295 15:25:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:49.295 15:25:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=33f1892fece898ee3404f5d062fe6fc9 00:29:49.295 15:25:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 33f1892fece898ee3404f5d062fe6fc9 != \3\3\f\1\8\9\2\f\e\c\e\8\9\8\e\e\3\4\0\4\f\5\d\0\6\2\f\e\6\f\c\9 ]] 00:29:49.295 15:25:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:49.295 15:25:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:49.295 15:25:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:29:49.295 15:25:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:29:49.295 15:25:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:29:49.295 15:25:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:49.295 15:25:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:29:49.295 15:25:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:29:49.295 15:25:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:29:49.295 15:25:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:29:49.295 15:25:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 86923 ]] 00:29:49.295 15:25:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 86923 00:29:49.295 15:25:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 86923 ']' 00:29:49.295 15:25:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 86923 00:29:49.295 15:25:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname 00:29:49.295 15:25:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:49.295 15:25:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86923 00:29:49.295 15:25:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:49.295 15:25:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:49.295 15:25:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86923' 00:29:49.295 killing process with pid 86923 00:29:49.295 15:25:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 86923 00:29:49.295 15:25:27 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@972 -- # wait 86923 00:29:50.673 [2024-07-15 15:25:28.437524] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:50.673 [2024-07-15 15:25:28.457424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.673 [2024-07-15 15:25:28.457474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:50.673 [2024-07-15 15:25:28.457487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:50.673 [2024-07-15 15:25:28.457510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.673 [2024-07-15 15:25:28.457532] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:50.673 [2024-07-15 15:25:28.461754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.673 [2024-07-15 15:25:28.461780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:50.673 [2024-07-15 15:25:28.461789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.217 ms 00:29:50.673 [2024-07-15 15:25:28.461797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.673 [2024-07-15 15:25:28.462012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.673 [2024-07-15 15:25:28.462040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:50.673 [2024-07-15 15:25:28.462052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.194 ms 00:29:50.673 [2024-07-15 15:25:28.462060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.673 [2024-07-15 15:25:28.463260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.673 [2024-07-15 15:25:28.463292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:50.673 [2024-07-15 15:25:28.463302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.187 ms 00:29:50.673 [2024-07-15 15:25:28.463311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.673 [2024-07-15 15:25:28.464383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.673 [2024-07-15 15:25:28.464409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:50.673 [2024-07-15 15:25:28.464420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.046 ms 00:29:50.673 [2024-07-15 15:25:28.464433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.673 [2024-07-15 15:25:28.480952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.673 [2024-07-15 15:25:28.480997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:50.673 [2024-07-15 15:25:28.481010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.503 ms 00:29:50.673 [2024-07-15 15:25:28.481017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.673 [2024-07-15 15:25:28.489850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.673 [2024-07-15 15:25:28.489899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:50.673 [2024-07-15 15:25:28.489917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.816 ms 00:29:50.673 [2024-07-15 15:25:28.489925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.673 [2024-07-15 15:25:28.490032] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.673 [2024-07-15 15:25:28.490043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:50.673 [2024-07-15 15:25:28.490052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.071 ms 00:29:50.673 [2024-07-15 15:25:28.490060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.673 [2024-07-15 15:25:28.506180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.673 [2024-07-15 15:25:28.506212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:29:50.673 [2024-07-15 15:25:28.506222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.135 ms 00:29:50.673 [2024-07-15 15:25:28.506230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.673 [2024-07-15 15:25:28.523242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.673 [2024-07-15 15:25:28.523290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:29:50.673 [2024-07-15 15:25:28.523303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.013 ms 00:29:50.673 [2024-07-15 15:25:28.523311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.673 [2024-07-15 15:25:28.541818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.673 [2024-07-15 15:25:28.541877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:50.673 [2024-07-15 15:25:28.541889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.500 ms 00:29:50.673 [2024-07-15 15:25:28.541897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.673 [2024-07-15 15:25:28.559725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.673 [2024-07-15 15:25:28.559780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:50.673 [2024-07-15 15:25:28.559794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.768 ms 00:29:50.673 [2024-07-15 15:25:28.559803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.673 [2024-07-15 15:25:28.559841] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:50.673 [2024-07-15 15:25:28.559860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:50.673 [2024-07-15 15:25:28.559871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:50.673 [2024-07-15 15:25:28.559880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:50.673 [2024-07-15 15:25:28.559890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:50.674 [2024-07-15 15:25:28.559899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:50.674 [2024-07-15 15:25:28.559908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:50.674 [2024-07-15 15:25:28.559917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:50.674 [2024-07-15 15:25:28.559926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:50.674 [2024-07-15 15:25:28.559936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 
0 / 261120 wr_cnt: 0 state: free 00:29:50.674 [2024-07-15 15:25:28.559944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:50.674 [2024-07-15 15:25:28.559953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:50.674 [2024-07-15 15:25:28.559962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:50.674 [2024-07-15 15:25:28.559971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:50.674 [2024-07-15 15:25:28.559980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:50.674 [2024-07-15 15:25:28.560003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:50.674 [2024-07-15 15:25:28.560013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:50.674 [2024-07-15 15:25:28.560022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:50.674 [2024-07-15 15:25:28.560030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:50.674 [2024-07-15 15:25:28.560042] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:50.674 [2024-07-15 15:25:28.560084] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: fde2acbb-f600-4c55-ac99-4c607b8f33d1 00:29:50.674 [2024-07-15 15:25:28.560100] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:50.674 [2024-07-15 15:25:28.560113] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:29:50.674 [2024-07-15 15:25:28.560121] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:29:50.674 [2024-07-15 15:25:28.560130] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:29:50.674 [2024-07-15 15:25:28.560138] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:50.674 [2024-07-15 15:25:28.560148] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:50.674 [2024-07-15 15:25:28.560156] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:50.674 [2024-07-15 15:25:28.560164] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:50.674 [2024-07-15 15:25:28.560176] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:50.674 [2024-07-15 15:25:28.560185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.674 [2024-07-15 15:25:28.560194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:50.674 [2024-07-15 15:25:28.560204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.346 ms 00:29:50.674 [2024-07-15 15:25:28.560212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.674 [2024-07-15 15:25:28.583240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.674 [2024-07-15 15:25:28.583297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:50.674 [2024-07-15 15:25:28.583309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.028 ms 00:29:50.674 [2024-07-15 15:25:28.583317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.674 [2024-07-15 15:25:28.583809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:29:50.674 [2024-07-15 15:25:28.583817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:50.674 [2024-07-15 15:25:28.583826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.453 ms 00:29:50.674 [2024-07-15 15:25:28.583833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.674 [2024-07-15 15:25:28.647983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.674 [2024-07-15 15:25:28.648059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:50.674 [2024-07-15 15:25:28.648074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.674 [2024-07-15 15:25:28.648082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.674 [2024-07-15 15:25:28.648137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.674 [2024-07-15 15:25:28.648147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:50.674 [2024-07-15 15:25:28.648157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.674 [2024-07-15 15:25:28.648165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.674 [2024-07-15 15:25:28.648278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.674 [2024-07-15 15:25:28.648292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:50.674 [2024-07-15 15:25:28.648301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.674 [2024-07-15 15:25:28.648309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.674 [2024-07-15 15:25:28.648341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.674 [2024-07-15 15:25:28.648350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:50.674 [2024-07-15 15:25:28.648357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.674 [2024-07-15 15:25:28.648364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.674 [2024-07-15 15:25:28.773734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.674 [2024-07-15 15:25:28.773799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:50.674 [2024-07-15 15:25:28.773812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.674 [2024-07-15 15:25:28.773821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.932 [2024-07-15 15:25:28.882379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.933 [2024-07-15 15:25:28.882444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:50.933 [2024-07-15 15:25:28.882457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.933 [2024-07-15 15:25:28.882481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.933 [2024-07-15 15:25:28.882580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.933 [2024-07-15 15:25:28.882599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:50.933 [2024-07-15 15:25:28.882606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.933 [2024-07-15 15:25:28.882614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.933 [2024-07-15 15:25:28.882652] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.933 [2024-07-15 15:25:28.882661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:50.933 [2024-07-15 15:25:28.882668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.933 [2024-07-15 15:25:28.882675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.933 [2024-07-15 15:25:28.882786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.933 [2024-07-15 15:25:28.882802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:50.933 [2024-07-15 15:25:28.882810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.933 [2024-07-15 15:25:28.882817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.933 [2024-07-15 15:25:28.882852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.933 [2024-07-15 15:25:28.882863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:50.933 [2024-07-15 15:25:28.882870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.933 [2024-07-15 15:25:28.882878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.933 [2024-07-15 15:25:28.882916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.933 [2024-07-15 15:25:28.882929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:50.933 [2024-07-15 15:25:28.882937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.933 [2024-07-15 15:25:28.882944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.933 [2024-07-15 15:25:28.882988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.933 [2024-07-15 15:25:28.883017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:50.933 [2024-07-15 15:25:28.883025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.933 [2024-07-15 15:25:28.883032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.933 [2024-07-15 15:25:28.883174] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 426.540 ms, result 0 00:29:52.305 15:25:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:52.305 15:25:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:52.563 15:25:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:29:52.563 15:25:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:29:52.563 15:25:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:29:52.563 15:25:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:52.563 Remove shared memory files 00:29:52.563 15:25:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:29:52.563 15:25:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:52.563 15:25:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:52.563 15:25:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:52.563 15:25:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid86682 00:29:52.563 15:25:30 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:52.563 15:25:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:52.563 ************************************ 00:29:52.563 END TEST ftl_upgrade_shutdown 00:29:52.563 ************************************ 00:29:52.563 00:29:52.563 real 1m38.371s 00:29:52.563 user 2m18.903s 00:29:52.563 sys 0m21.453s 00:29:52.563 15:25:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:52.563 15:25:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:52.563 Process with pid 79977 is not found 00:29:52.563 15:25:30 ftl -- common/autotest_common.sh@1142 -- # return 0 00:29:52.563 15:25:30 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:29:52.563 15:25:30 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:29:52.563 15:25:30 ftl -- ftl/ftl.sh@14 -- # killprocess 79977 00:29:52.563 15:25:30 ftl -- common/autotest_common.sh@948 -- # '[' -z 79977 ']' 00:29:52.563 15:25:30 ftl -- common/autotest_common.sh@952 -- # kill -0 79977 00:29:52.563 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (79977) - No such process 00:29:52.563 15:25:30 ftl -- common/autotest_common.sh@975 -- # echo 'Process with pid 79977 is not found' 00:29:52.563 15:25:30 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:29:52.563 15:25:30 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:52.563 15:25:30 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=87183 00:29:52.563 15:25:30 ftl -- ftl/ftl.sh@20 -- # waitforlisten 87183 00:29:52.563 15:25:30 ftl -- common/autotest_common.sh@829 -- # '[' -z 87183 ']' 00:29:52.563 15:25:30 ftl -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:52.563 15:25:30 ftl -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:52.563 15:25:30 ftl -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:52.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:52.563 15:25:30 ftl -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:52.563 15:25:30 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:52.564 [2024-07-15 15:25:30.588072] Starting SPDK v24.09-pre git sha1 33d82c0da / DPDK 24.03.0 initialization... 
00:29:52.564 [2024-07-15 15:25:30.588378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87183 ] 00:29:52.822 [2024-07-15 15:25:30.752865] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.081 [2024-07-15 15:25:30.997915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.049 15:25:31 ftl -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:54.049 15:25:31 ftl -- common/autotest_common.sh@862 -- # return 0 00:29:54.049 15:25:31 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:29:54.307 nvme0n1 00:29:54.307 15:25:32 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:29:54.307 15:25:32 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:54.307 15:25:32 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:54.565 15:25:32 ftl -- ftl/common.sh@28 -- # stores=e2c11311-a6a4-4b9d-92fc-1a169dc5b07d 00:29:54.565 15:25:32 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:29:54.565 15:25:32 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e2c11311-a6a4-4b9d-92fc-1a169dc5b07d 00:29:54.565 15:25:32 ftl -- ftl/ftl.sh@23 -- # killprocess 87183 00:29:54.565 15:25:32 ftl -- common/autotest_common.sh@948 -- # '[' -z 87183 ']' 00:29:54.565 15:25:32 ftl -- common/autotest_common.sh@952 -- # kill -0 87183 00:29:54.565 15:25:32 ftl -- common/autotest_common.sh@953 -- # uname 00:29:54.565 15:25:32 ftl -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:54.565 15:25:32 ftl -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87183 00:29:54.823 killing process with pid 87183 00:29:54.823 15:25:32 ftl -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:54.823 15:25:32 ftl -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:54.823 15:25:32 ftl -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87183' 00:29:54.823 15:25:32 ftl -- common/autotest_common.sh@967 -- # kill 87183 00:29:54.823 15:25:32 ftl -- common/autotest_common.sh@972 -- # wait 87183 00:29:57.352 15:25:35 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:57.611 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:57.870 Waiting for block devices as requested 00:29:57.870 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:29:57.870 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:29:57.870 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:29:58.129 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:30:03.436 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:30:03.436 15:25:41 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:30:03.436 Remove shared memory files 00:30:03.436 15:25:41 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:03.436 15:25:41 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:30:03.436 15:25:41 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:30:03.436 15:25:41 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:30:03.436 15:25:41 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:03.436 15:25:41 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:30:03.436 
************************************ 00:30:03.436 END TEST ftl 00:30:03.436 ************************************ 00:30:03.436 00:30:03.436 real 10m30.423s 00:30:03.436 user 13m28.280s 00:30:03.436 sys 1m13.518s 00:30:03.436 15:25:41 ftl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:03.436 15:25:41 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:03.436 15:25:41 -- common/autotest_common.sh@1142 -- # return 0 00:30:03.436 15:25:41 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:30:03.436 15:25:41 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:30:03.436 15:25:41 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:30:03.436 15:25:41 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:30:03.436 15:25:41 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:30:03.436 15:25:41 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:30:03.436 15:25:41 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:30:03.436 15:25:41 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:30:03.436 15:25:41 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:30:03.436 15:25:41 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:30:03.436 15:25:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:03.436 15:25:41 -- common/autotest_common.sh@10 -- # set +x 00:30:03.436 15:25:41 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:30:03.436 15:25:41 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:30:03.436 15:25:41 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:30:03.436 15:25:41 -- common/autotest_common.sh@10 -- # set +x 00:30:04.812 INFO: APP EXITING 00:30:04.812 INFO: killing all VMs 00:30:04.812 INFO: killing vhost app 00:30:04.812 INFO: EXIT DONE 00:30:05.378 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:05.641 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:30:05.905 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:30:05.905 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:30:05.905 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:30:06.163 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:06.732 Cleaning 00:30:06.732 Removing: /var/run/dpdk/spdk0/config 00:30:06.732 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:30:06.732 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:30:06.732 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:30:06.732 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:30:06.732 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:30:06.732 Removing: /var/run/dpdk/spdk0/hugepage_info 00:30:06.732 Removing: /var/run/dpdk/spdk0 00:30:06.732 Removing: /var/run/dpdk/spdk_pid62062 00:30:06.732 Removing: /var/run/dpdk/spdk_pid62299 00:30:06.732 Removing: /var/run/dpdk/spdk_pid62521 00:30:06.732 Removing: /var/run/dpdk/spdk_pid62636 00:30:06.732 Removing: /var/run/dpdk/spdk_pid62698 00:30:06.732 Removing: /var/run/dpdk/spdk_pid62837 00:30:06.732 Removing: /var/run/dpdk/spdk_pid62855 00:30:06.732 Removing: /var/run/dpdk/spdk_pid63041 00:30:06.732 Removing: /var/run/dpdk/spdk_pid63150 00:30:06.732 Removing: /var/run/dpdk/spdk_pid63249 00:30:06.732 Removing: /var/run/dpdk/spdk_pid63372 00:30:06.732 Removing: /var/run/dpdk/spdk_pid63476 00:30:06.732 Removing: /var/run/dpdk/spdk_pid63521 00:30:06.732 Removing: /var/run/dpdk/spdk_pid63558 00:30:06.732 Removing: /var/run/dpdk/spdk_pid63626 00:30:06.732 Removing: /var/run/dpdk/spdk_pid63756 00:30:06.732 Removing: 
/var/run/dpdk/spdk_pid64203 00:30:06.732 Removing: /var/run/dpdk/spdk_pid64278 00:30:06.732 Removing: /var/run/dpdk/spdk_pid64373 00:30:06.732 Removing: /var/run/dpdk/spdk_pid64390 00:30:06.732 Removing: /var/run/dpdk/spdk_pid64557 00:30:06.732 Removing: /var/run/dpdk/spdk_pid64584 00:30:06.732 Removing: /var/run/dpdk/spdk_pid64743 00:30:06.732 Removing: /var/run/dpdk/spdk_pid64765 00:30:06.732 Removing: /var/run/dpdk/spdk_pid64840 00:30:06.732 Removing: /var/run/dpdk/spdk_pid64863 00:30:06.732 Removing: /var/run/dpdk/spdk_pid64933 00:30:06.732 Removing: /var/run/dpdk/spdk_pid64955 00:30:06.732 Removing: /var/run/dpdk/spdk_pid65162 00:30:06.732 Removing: /var/run/dpdk/spdk_pid65199 00:30:06.732 Removing: /var/run/dpdk/spdk_pid65284 00:30:06.732 Removing: /var/run/dpdk/spdk_pid65372 00:30:06.732 Removing: /var/run/dpdk/spdk_pid65414 00:30:06.732 Removing: /var/run/dpdk/spdk_pid65492 00:30:06.732 Removing: /var/run/dpdk/spdk_pid65544 00:30:06.732 Removing: /var/run/dpdk/spdk_pid65591 00:30:06.732 Removing: /var/run/dpdk/spdk_pid65637 00:30:06.732 Removing: /var/run/dpdk/spdk_pid65689 00:30:06.732 Removing: /var/run/dpdk/spdk_pid65741 00:30:06.732 Removing: /var/run/dpdk/spdk_pid65788 00:30:06.732 Removing: /var/run/dpdk/spdk_pid65840 00:30:06.732 Removing: /var/run/dpdk/spdk_pid65892 00:30:06.732 Removing: /var/run/dpdk/spdk_pid65943 00:30:06.732 Removing: /var/run/dpdk/spdk_pid65991 00:30:06.732 Removing: /var/run/dpdk/spdk_pid66043 00:30:06.732 Removing: /var/run/dpdk/spdk_pid66095 00:30:06.732 Removing: /var/run/dpdk/spdk_pid66136 00:30:06.732 Removing: /var/run/dpdk/spdk_pid66188 00:30:06.732 Removing: /var/run/dpdk/spdk_pid66240 00:30:06.732 Removing: /var/run/dpdk/spdk_pid66291 00:30:06.732 Removing: /var/run/dpdk/spdk_pid66342 00:30:06.732 Removing: /var/run/dpdk/spdk_pid66397 00:30:06.732 Removing: /var/run/dpdk/spdk_pid66449 00:30:06.732 Removing: /var/run/dpdk/spdk_pid66502 00:30:06.732 Removing: /var/run/dpdk/spdk_pid66590 00:30:06.732 Removing: /var/run/dpdk/spdk_pid66717 00:30:06.732 Removing: /var/run/dpdk/spdk_pid66896 00:30:06.732 Removing: /var/run/dpdk/spdk_pid66991 00:30:06.732 Removing: /var/run/dpdk/spdk_pid67039 00:30:06.732 Removing: /var/run/dpdk/spdk_pid67485 00:30:06.732 Removing: /var/run/dpdk/spdk_pid67594 00:30:06.732 Removing: /var/run/dpdk/spdk_pid67709 00:30:06.732 Removing: /var/run/dpdk/spdk_pid67773 00:30:06.732 Removing: /var/run/dpdk/spdk_pid67804 00:30:06.732 Removing: /var/run/dpdk/spdk_pid67880 00:30:06.732 Removing: /var/run/dpdk/spdk_pid68523 00:30:06.732 Removing: /var/run/dpdk/spdk_pid68576 00:30:06.732 Removing: /var/run/dpdk/spdk_pid69071 00:30:06.732 Removing: /var/run/dpdk/spdk_pid69181 00:30:06.732 Removing: /var/run/dpdk/spdk_pid69307 00:30:06.732 Removing: /var/run/dpdk/spdk_pid69365 00:30:06.732 Removing: /var/run/dpdk/spdk_pid69391 00:30:06.992 Removing: /var/run/dpdk/spdk_pid69422 00:30:06.992 Removing: /var/run/dpdk/spdk_pid71288 00:30:06.992 Removing: /var/run/dpdk/spdk_pid71442 00:30:06.992 Removing: /var/run/dpdk/spdk_pid71447 00:30:06.992 Removing: /var/run/dpdk/spdk_pid71463 00:30:06.992 Removing: /var/run/dpdk/spdk_pid71534 00:30:06.992 Removing: /var/run/dpdk/spdk_pid71543 00:30:06.992 Removing: /var/run/dpdk/spdk_pid71561 00:30:06.992 Removing: /var/run/dpdk/spdk_pid71600 00:30:06.992 Removing: /var/run/dpdk/spdk_pid71604 00:30:06.992 Removing: /var/run/dpdk/spdk_pid71616 00:30:06.992 Removing: /var/run/dpdk/spdk_pid71699 00:30:06.992 Removing: /var/run/dpdk/spdk_pid71703 00:30:06.992 Removing: /var/run/dpdk/spdk_pid71721 
00:30:06.992 Removing: /var/run/dpdk/spdk_pid73145 00:30:06.992 Removing: /var/run/dpdk/spdk_pid73255 00:30:06.992 Removing: /var/run/dpdk/spdk_pid74662 00:30:06.992 Removing: /var/run/dpdk/spdk_pid76035 00:30:06.992 Removing: /var/run/dpdk/spdk_pid76154 00:30:06.992 Removing: /var/run/dpdk/spdk_pid76271 00:30:06.992 Removing: /var/run/dpdk/spdk_pid76394 00:30:06.992 Removing: /var/run/dpdk/spdk_pid76536 00:30:06.992 Removing: /var/run/dpdk/spdk_pid76622 00:30:06.992 Removing: /var/run/dpdk/spdk_pid76762 00:30:06.992 Removing: /var/run/dpdk/spdk_pid77138 00:30:06.992 Removing: /var/run/dpdk/spdk_pid77186 00:30:06.992 Removing: /var/run/dpdk/spdk_pid77648 00:30:06.992 Removing: /var/run/dpdk/spdk_pid77842 00:30:06.992 Removing: /var/run/dpdk/spdk_pid77950 00:30:06.992 Removing: /var/run/dpdk/spdk_pid78066 00:30:06.992 Removing: /var/run/dpdk/spdk_pid78125 00:30:06.992 Removing: /var/run/dpdk/spdk_pid78156 00:30:06.992 Removing: /var/run/dpdk/spdk_pid78476 00:30:06.992 Removing: /var/run/dpdk/spdk_pid78542 00:30:06.992 Removing: /var/run/dpdk/spdk_pid78634 00:30:06.992 Removing: /var/run/dpdk/spdk_pid79037 00:30:06.992 Removing: /var/run/dpdk/spdk_pid79185 00:30:06.992 Removing: /var/run/dpdk/spdk_pid79977 00:30:06.992 Removing: /var/run/dpdk/spdk_pid80122 00:30:06.992 Removing: /var/run/dpdk/spdk_pid80375 00:30:06.992 Removing: /var/run/dpdk/spdk_pid80480 00:30:06.992 Removing: /var/run/dpdk/spdk_pid80839 00:30:06.992 Removing: /var/run/dpdk/spdk_pid81093 00:30:06.992 Removing: /var/run/dpdk/spdk_pid81501 00:30:06.992 Removing: /var/run/dpdk/spdk_pid81743 00:30:06.992 Removing: /var/run/dpdk/spdk_pid81873 00:30:06.992 Removing: /var/run/dpdk/spdk_pid81937 00:30:06.992 Removing: /var/run/dpdk/spdk_pid82058 00:30:06.992 Removing: /var/run/dpdk/spdk_pid82100 00:30:06.992 Removing: /var/run/dpdk/spdk_pid82170 00:30:06.992 Removing: /var/run/dpdk/spdk_pid82353 00:30:06.992 Removing: /var/run/dpdk/spdk_pid82628 00:30:06.992 Removing: /var/run/dpdk/spdk_pid82982 00:30:06.992 Removing: /var/run/dpdk/spdk_pid83324 00:30:06.992 Removing: /var/run/dpdk/spdk_pid83704 00:30:06.992 Removing: /var/run/dpdk/spdk_pid84108 00:30:06.992 Removing: /var/run/dpdk/spdk_pid84246 00:30:06.992 Removing: /var/run/dpdk/spdk_pid84333 00:30:06.992 Removing: /var/run/dpdk/spdk_pid84834 00:30:06.992 Removing: /var/run/dpdk/spdk_pid84898 00:30:06.992 Removing: /var/run/dpdk/spdk_pid85294 00:30:06.992 Removing: /var/run/dpdk/spdk_pid85642 00:30:06.992 Removing: /var/run/dpdk/spdk_pid86046 00:30:06.992 Removing: /var/run/dpdk/spdk_pid86174 00:30:06.992 Removing: /var/run/dpdk/spdk_pid86232 00:30:06.992 Removing: /var/run/dpdk/spdk_pid86306 00:30:06.992 Removing: /var/run/dpdk/spdk_pid86373 00:30:06.992 Removing: /var/run/dpdk/spdk_pid86448 00:30:06.992 Removing: /var/run/dpdk/spdk_pid86682 00:30:06.992 Removing: /var/run/dpdk/spdk_pid86762 00:30:06.992 Removing: /var/run/dpdk/spdk_pid86829 00:30:06.992 Removing: /var/run/dpdk/spdk_pid86923 00:30:06.992 Removing: /var/run/dpdk/spdk_pid86959 00:30:06.992 Removing: /var/run/dpdk/spdk_pid87048 00:30:06.992 Removing: /var/run/dpdk/spdk_pid87183 00:30:06.992 Clean 00:30:07.251 15:25:45 -- common/autotest_common.sh@1451 -- # return 0 00:30:07.251 15:25:45 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:30:07.251 15:25:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:07.251 15:25:45 -- common/autotest_common.sh@10 -- # set +x 00:30:07.251 15:25:45 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:30:07.251 15:25:45 -- common/autotest_common.sh@728 -- # 
xtrace_disable 00:30:07.251 15:25:45 -- common/autotest_common.sh@10 -- # set +x 00:30:07.251 15:25:45 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:30:07.251 15:25:45 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:30:07.251 15:25:45 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:30:07.251 15:25:45 -- spdk/autotest.sh@391 -- # hash lcov 00:30:07.251 15:25:45 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:30:07.251 15:25:45 -- spdk/autotest.sh@393 -- # hostname 00:30:07.251 15:25:45 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:30:07.509 geninfo: WARNING: invalid characters removed from testname! 00:30:34.119 15:26:09 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:35.555 15:26:13 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:38.090 15:26:15 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:40.056 15:26:17 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:42.012 15:26:20 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:44.565 15:26:22 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:47.099 15:26:24 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:47.099 15:26:24 -- common/autobuild_common.sh@15 -- $ source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:47.099 15:26:24 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:47.099 15:26:24 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:47.099 15:26:24 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:47.099 15:26:24 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.099 15:26:24 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.100 15:26:24 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.100 15:26:24 -- paths/export.sh@5 -- $ export PATH 00:30:47.100 15:26:24 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.100 15:26:24 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:30:47.100 15:26:24 -- common/autobuild_common.sh@444 -- $ date +%s 00:30:47.100 15:26:24 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721057184.XXXXXX 00:30:47.100 15:26:24 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721057184.96QeZa 00:30:47.100 15:26:24 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:30:47.100 15:26:24 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:30:47.100 15:26:24 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:30:47.100 15:26:24 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:30:47.100 15:26:24 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:30:47.100 15:26:24 -- common/autobuild_common.sh@460 -- $ get_config_params 00:30:47.100 15:26:24 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:30:47.100 15:26:24 -- common/autotest_common.sh@10 -- $ set +x 00:30:47.100 15:26:24 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan 
--enable-asan --enable-coverage --with-ublk --with-xnvme' 00:30:47.100 15:26:24 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:30:47.100 15:26:24 -- pm/common@17 -- $ local monitor 00:30:47.100 15:26:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:47.100 15:26:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:47.100 15:26:24 -- pm/common@21 -- $ date +%s 00:30:47.100 15:26:24 -- pm/common@25 -- $ sleep 1 00:30:47.100 15:26:24 -- pm/common@21 -- $ date +%s 00:30:47.100 15:26:24 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721057184 00:30:47.100 15:26:24 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721057184 00:30:47.100 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721057184_collect-vmstat.pm.log 00:30:47.100 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721057184_collect-cpu-load.pm.log 00:30:48.035 15:26:25 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:30:48.036 15:26:25 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:30:48.036 15:26:25 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:30:48.036 15:26:25 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:30:48.036 15:26:25 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:30:48.036 15:26:25 -- spdk/autopackage.sh@19 -- $ timing_finish 00:30:48.036 15:26:25 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:48.036 15:26:25 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:30:48.036 15:26:25 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:30:48.036 15:26:26 -- spdk/autopackage.sh@20 -- $ exit 0 00:30:48.036 15:26:26 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:30:48.036 15:26:26 -- pm/common@29 -- $ signal_monitor_resources TERM 00:30:48.036 15:26:26 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:30:48.036 15:26:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:48.036 15:26:26 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:30:48.036 15:26:26 -- pm/common@44 -- $ pid=88886 00:30:48.036 15:26:26 -- pm/common@50 -- $ kill -TERM 88886 00:30:48.036 15:26:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:48.036 15:26:26 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:30:48.036 15:26:26 -- pm/common@44 -- $ pid=88888 00:30:48.036 15:26:26 -- pm/common@50 -- $ kill -TERM 88888 00:30:48.036 + [[ -n 5361 ]] 00:30:48.036 + sudo kill 5361 00:30:48.046 [Pipeline] } 00:30:48.069 [Pipeline] // timeout 00:30:48.077 [Pipeline] } 00:30:48.100 [Pipeline] // stage 00:30:48.107 [Pipeline] } 00:30:48.125 [Pipeline] // catchError 00:30:48.136 [Pipeline] stage 00:30:48.139 [Pipeline] { (Stop VM) 00:30:48.155 [Pipeline] sh 00:30:48.436 + vagrant halt 00:30:51.720 ==> default: Halting domain... 00:30:58.284 [Pipeline] sh 00:30:58.567 + vagrant destroy -f 00:31:01.860 ==> default: Removing domain... 
00:31:02.130 [Pipeline] sh 00:31:02.409 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:31:02.419 [Pipeline] } 00:31:02.437 [Pipeline] // stage 00:31:02.442 [Pipeline] } 00:31:02.455 [Pipeline] // dir 00:31:02.462 [Pipeline] } 00:31:02.481 [Pipeline] // wrap 00:31:02.486 [Pipeline] } 00:31:02.500 [Pipeline] // catchError 00:31:02.512 [Pipeline] stage 00:31:02.515 [Pipeline] { (Epilogue) 00:31:02.530 [Pipeline] sh 00:31:02.803 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:31:09.375 [Pipeline] catchError 00:31:09.376 [Pipeline] { 00:31:09.389 [Pipeline] sh 00:31:09.693 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:31:09.693 Artifacts sizes are good 00:31:09.703 [Pipeline] } 00:31:09.718 [Pipeline] // catchError 00:31:09.726 [Pipeline] archiveArtifacts 00:31:09.731 Archiving artifacts 00:31:09.868 [Pipeline] cleanWs 00:31:09.877 [WS-CLEANUP] Deleting project workspace... 00:31:09.877 [WS-CLEANUP] Deferred wipeout is used... 00:31:09.883 [WS-CLEANUP] done 00:31:09.885 [Pipeline] } 00:31:09.903 [Pipeline] // stage 00:31:09.908 [Pipeline] } 00:31:09.925 [Pipeline] // node 00:31:09.931 [Pipeline] End of Pipeline 00:31:10.073 Finished: SUCCESS
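
For reference, the MD5 validation pass traced earlier (ftl/upgrade_shutdown.sh lines 96-105 in the xtrace) reduces to a read-back-and-compare loop: each iteration copies 1024 blocks of 1 MiB out of the ftln1 bdev with spdk_dd, advances the skip offset by 1024, and checks the md5sum of the copy against the checksum recorded before the FTL shutdown. The bash sketch below reconstructs only what is visible in this trace; the iteration count, the md5_before array holding the pre-shutdown sums, and the error handling are illustrative assumptions, not the actual test script or its tcp_dd helper.

#!/usr/bin/env bash
# Sketch of the checksum loop seen in the xtrace above (assumed reconstruction,
# not the real test/ftl/upgrade_shutdown.sh or the tcp_dd wrapper).
set -euo pipefail

iterations=2            # the trace shows two passes: skip=0, then skip=1024
skip=0
file=/home/vagrant/spdk_repo/spdk/test/ftl/file
# Checksums the trace compares against; how the real script stores them is assumed here.
md5_before=(7d04650d73398d8fa3ab4a193fb20bfb 33f1892fece898ee3404f5d062fe6fc9)

for ((i = 0; i < iterations; i++)); do
    echo "Validate MD5 checksum, iteration $((i + 1))"

    # Read 1024 x 1 MiB blocks back from the ftln1 bdev, using the same flags
    # as the spdk_dd invocation recorded in the log.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --cpumask='[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip="$skip"
    skip=$((skip + 1024))

    # Compare the read-back checksum with the pre-shutdown one.
    sum=$(md5sum "$file" | cut -f1 -d' ')
    if [[ "$sum" != "${md5_before[i]}" ]]; then
        echo "MD5 mismatch on iteration $((i + 1)): got $sum" >&2
        exit 1
    fi
done

Once both iterations match, the traced script clears its traps and removes the scratch file and the tgt.json/ini.json configs, which corresponds to the cleanup and "Remove shared memory files" steps near the end of the test output.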